This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

Use tokio runtime handle instead of TaskExecutor abstraction #9737

Merged
7 commits merged into master from bkchr-forward-tokio
Sep 12, 2021

Conversation

bkchr
Member

@bkchr bkchr commented Sep 9, 2021

Before this PR we had the `TaskExecutor` abstraction, which theoretically allowed any futures executor to be used. However, this was never tested and is currently not really required. Anyone running a node currently only uses Tokio and nothing else (because this was hard coded in the CLI). So, this PR removes the `TaskExecutor` abstraction and relies directly on the tokio runtime handle.

Besides these changes, this PR also makes sure that the HTTP and WS RPC servers use the same tokio runtime. This fixes a panic that occurred when the RPC servers were dropped inside an async function (tokio doesn't like a tokio runtime being dropped in the async context of another tokio runtime).

As we don't use any custom runtime in the HTTP RPC server anymore, this PR also removes the `rpc-http-threads` CLI argument. If external parties complain that there aren't enough threads for the RPC server, we could add support for increasing the thread count of the tokio runtime.

polkadot companion: paritytech/polkadot#3830
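
For illustration, here is a minimal Rust sketch of the pattern described above, assuming only the `tokio` crate; the `spawn_essential` helper and all names in it are hypothetical and not taken from the PR:

```rust
// Minimal sketch (not the PR's actual code): services take a cloned
// `tokio::runtime::Handle` and spawn onto the shared multi-threaded
// runtime instead of going through a boxed `TaskExecutor` closure.
use std::future::Future;
use tokio::runtime::{Handle, Runtime};

// Hypothetical helper standing in for the old `TaskExecutor` call sites.
fn spawn_essential<F>(handle: &Handle, fut: F)
where
    F: Future<Output = ()> + Send + 'static,
{
    handle.spawn(fut);
}

fn main() {
    let runtime = Runtime::new().expect("failed to build tokio runtime");
    let handle = runtime.handle().clone();

    spawn_essential(&handle, async {
        println!("task running on the shared tokio runtime");
    });

    // Give the spawned task a chance to run on the worker threads.
    runtime.block_on(async { tokio::task::yield_now().await });

    // `runtime` is dropped here, outside of any async context. Dropping a
    // runtime (e.g. one owned by an RPC server) inside another runtime's
    // async context is exactly the panic the PR description mentions.
}
```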

@bkchr added the A0-please_review (Pull request needs code review.), B5-clientnoteworthy, and C1-low (PR touches the given topic and has a low impact on builders.) labels on Sep 9, 2021
Comment on lines 147 to 148
.threads(1)
.event_loop_executor(tokio_handle)
Member Author

I read the docs and checked the code.

I'm almost sure, but can someone confirm that this is correct?

Member

@niklasad1 niklasad1 Sep 9, 2021

Member Author

By correct I mean that we are still answering requests in parallel.

Member

Yeah, I was referring to:

So it’s also possible to run a multi-threaded server by passing the default tokio::runtime executor to this builder and setting threads to 1.

Member Author

Yeah, my second comment was just to make my first one more clear xD

Contributor

Yup, I believe so too!
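
For context, a minimal sketch of the configuration being reviewed, assuming the jsonrpc-http-server 18.x API (the RPC method, address, and handle name are illustrative, not the PR's actual code): the HTTP server keeps a single event-loop thread, but request futures are executed on the shared multi-threaded tokio runtime, which is why requests are still answered in parallel.

```rust
// Minimal sketch, assuming jsonrpc-http-server 18.x and the `tokio` crate
// with a multi-threaded runtime; names and the address are illustrative.
use jsonrpc_http_server::jsonrpc_core::{IoHandler, Params, Value};
use jsonrpc_http_server::ServerBuilder;

fn main() {
    let runtime = tokio::runtime::Runtime::new().expect("failed to build tokio runtime");
    let tokio_handle = runtime.handle().clone();

    let mut io = IoHandler::default();
    io.add_method("say_hello", |_params: Params| async {
        Ok(Value::String("hello".into()))
    });

    let server = ServerBuilder::new(io)
        // A single acceptor/event-loop thread for the HTTP server itself...
        .threads(1)
        // ...while request-handling futures run on the shared multi-threaded
        // tokio runtime passed in via its handle.
        .event_loop_executor(tokio_handle)
        .start_http(&"127.0.0.1:9933".parse().expect("valid socket address"))
        .expect("unable to start the RPC server");

    server.wait();
}
```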

Member

@ordian ordian left a comment

I don't know how to review this properly. Ideally, this PR could use a burn-in on an RPC node, but I'm not sure it's warranted.

I understand that we lose the `TaskExecutor` abstraction here; it might be worth adding back later, but that's probably out of scope here.

Other than that, looks good to me.

@bkchr
Member Author

bkchr commented Sep 9, 2021

I don't know how to review this properly. Ideally, this PR could use a burn-in on an RPC node, but I'm not sure it's warranted.

This custom HTTP thread number wasn't used by us; I will ping the people who requested it.

I understand that we lose the `TaskExecutor` abstraction here; it might be worth adding back later, but that's probably out of scope here.

It wasn't currently used anyway, and I think it may also not have worked with other executors, because hyper, for example, requires Tokio. However, as we will refactor a lot of things, this is one of the first things that would have gone away anyway.

Other than that, looks good to me.

👍

Member

@niklasad1 niklasad1 left a comment

LGTM, but yeah, it would be good to check with users of the `--rpc-http-threads` flag.

bkchr added a commit to paritytech/polkadot that referenced this pull request Sep 11, 2021
@bkchr
Member Author

bkchr commented Sep 12, 2021

bot merge

@ghost

ghost commented Sep 12, 2021

Trying merge.

@ghost ghost merged commit b674bd2 into master Sep 12, 2021
@ghost ghost deleted the bkchr-forward-tokio branch September 12, 2021 12:29
shawntabrizi added a commit to paritytech/cumulus that referenced this pull request Sep 15, 2021
shawntabrizi added a commit to paritytech/cumulus that referenced this pull request Sep 15, 2021
bkchr added a commit to paritytech/cumulus that referenced this pull request Sep 16, 2021
* Remove event pallet::metadata attributes

* Add scale-info deps, TypeInfo derives, update call variants

* Update metadata runtime APIs

* Add missing scale_info dependency, update rococo runtime API

* Add missing scale_info dependency

* Remove pushed diener patches

* Cargo.lock

* Add missing scale-info dependencies

* Fixes

* Statemint runtime fixes

* Call struct variant empty matches

* Add missing scale-info dependency

* Fixes

* scale-info 1.0

* cargo update -p xcm

* update lock

* Update Cargo.lock

* update to latest polkadot

* remove rpc_http_threads

paritytech/substrate#9737

* replace task executor with tokio handler

paritytech/substrate#9737

* fix test compilation?

* Update Cargo.lock

* cargo update

* remove unused

* Update substrate and polkadot

* Update test/client/src/lib.rs

Co-authored-by: Shawn Tabrizi <shawntabrizi@gmail.com>
Co-authored-by: Bastian Köcher <bkchr@users.noreply.github.com>
WebWizrd8 added a commit to WebWizrd8/cumulus that referenced this pull request Nov 21, 2022
ggwpez pushed a commit to ggwpez/runtimes that referenced this pull request Mar 10, 2023
ggwpez pushed a commit to ggwpez/runtimes that referenced this pull request Mar 14, 2023