
http POST support #499

Closed
wants to merge 647 commits into from

Conversation

fabricedesre (Contributor)

@Arqu I could use some help with something in the http handler: the new put_handler itself compiles fine, but as soon as I use the unixfs builder in that function (uncommenting any of the `let file = create_file_from_xxx` lines) I get a trait bound error from the route setup that I don't understand:

error[E0277]: the trait bound `fn(Extension<Arc<core::State<T>>>, http::Request<hyper::Body>) -> impl futures::Future<Output = Result<response::GatewayResponse, GatewayError>> {put_handler::<T>}: Handler<_, _>` is not satisfied
   --> iroh-gateway/src/handlers.rs:71:33
    |
71  |         .route("/:scheme/", put(put_handler::<T>))
    |                             --- ^^^^^^^^^^^^^^^^ the trait `Handler<_, _>` is not implemented for `fn(Extension<Arc<core::State<T>>>, http::Request<hyper::Body>) -> impl futures::Future<Output = Result<response::GatewayResponse, GatewayError>> {put_handler::<T>}`
    |                             |
    |                             required by a bound introduced by this call
    |
    = help: the trait `Handler<T, ReqBody>` is implemented for `Layered<S, T>`
note: required by a bound in `axum::routing::put`
   --> /home/fabrice/.cargo/registry/src/github.com-1ecc6299db9ec823/axum-0.5.17/src/routing/method_routing.rs:402:1
    |
402 | top_level_handler_fn!(put, PUT);
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ required by this bound in `axum::routing::put`
    = note: this error originates in the macro `top_level_handler_fn` (in Nightly builds, run with -Z macro-backtrace for more info)

Thanks!
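A note on that error: axum only implements `Handler` for functions whose returned future is `Send`, so holding a non-Send value across an `.await` inside the handler (for example the `LocalBoxStream`-based unixfs builder types flagged further down in the review) surfaces as exactly this opaque trait bound failure at the `.route(...)` call. A quick way to get the underlying diagnostic is to assert `Send` on the future directly; this is only a sketch, with the loader type and arguments as placeholders:

```rust
// Diagnostic sketch: if the handler's future is not Send, this fails to compile
// with "`X` cannot be sent between threads safely", naming the offending value,
// instead of the generic `Handler<_, _>` bound error.
fn assert_send<F: std::future::Future + Send>(fut: F) -> F {
    fut
}

// e.g. in a scratch test (placeholder loader type and arguments):
// let fut = assert_send(put_handler::<MyLoader>(Extension(state), request));
```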

dignifiedquire and others added 30 commits October 10, 2022 20:50
…nt (n0-computer#317)

use store exported path in iroh-one

closes #255, closes #309
* feat(gateway): fetch a DAG recursively as CAR.

* test(gateway): add test for recursive car fetch

also fixes clippy and code style issues

* refactor(resolver): use mapper in recursive resolve

* style(resolver): fix comment

* style(resolver): clippy fix

* style(gateway): clippy fix and test fix
* fix(unixfs): ensure links get persisted in store

This had to be merged with some other changes related to the size info.
Introduce a Block struct that has data, hash and links for DRY.
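For reference, a minimal sketch of such a `Block` type (field names assumed, not taken from the diff):

```rust
use bytes::Bytes;
use cid::Cid;

/// One encoded node: the raw bytes, the CID computed over them, and the links
/// to child nodes, kept together so callers can't mix up which belongs to which.
pub struct Block {
    cid: Cid,
    data: Bytes,
    links: Vec<Cid>,
}
```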

* test(unixfs): validate blocks in all roundtrip tests

Validate blocks in all roundtrip tests to make sure that the blocks that
are returned from file.encode or dir.encode are internally consistent.

* fix(store): only create new id when needed

The old code was creating new ids even if we already knew the cid, which
caused some havoc.
This allows adding large files without getting an error. It is a stopgap until
we have a persistent kad store.
## move RPC client construction into `iroh` (read: `iroh-api`)

I've moved creation of the RpcClient away from the CLI into `iroh` (to be `iroh-api`). This means that the API gains a bunch of dependencies and the CLI loses them.

Doing this made me realize that the API should own the RPC client, so I changed it from a reference (with a lifetime) to ownership.
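A minimal sketch of that change (the `Api` type name is assumed; `RpcClient` stands for the RPC client being constructed here):

```rust
// Before: the API borrowed the client, forcing a lifetime parameter onto it:
//     pub struct Api<'a> { client: &'a RpcClient }
// After: the API owns the client outright, and the lifetime parameter disappears.
pub struct Api {
    client: RpcClient,
}
```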

## `iroh get` start of implementation

Start of implementation of `iroh get` with a few mocked CLI tests

## `iroh` -> `iroh-api` crate

The plan is to have an `iroh` crate that contains the CLI and an `iroh-api` crate that contains the Rust API. Here we've done half of that renaming: `iroh` is renamed to `iroh-api`. I haven't done the second half, renaming `iroh-ctl` to `iroh`, as that would make this commit too different to read.
* Rename the iroh-ctl crate to the iroh crate

* test: test --version flag does what it should now
Acknowledges that a `raw` node is a valid `UnixfsNode` and therefore can be resolved as a file in the gateway
fabricedesre changed the title from "WIP: http PUT support" to "http POST support" on Nov 29, 2022
@Arqu (Collaborator) left a comment

I've reviewed this mostly to keep the conversation going.
@b5 what's our roadmap/product stance on the writable gateway story?

// If this gateway is not writable, return a 400 error.
if !state.config.writeable_gateway() {
    return Err(GatewayError::new(
        StatusCode::BAD_REQUEST,
Collaborator:

Instead of 400 Bad Request we should return 405 Method Not Allowed.
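For illustration, a sketch of the suggested change, assuming `GatewayError::new(status, message)` from the snippet above:

```rust
// Reject writes on a read-only gateway with 405: the request is well-formed,
// but the method is simply not allowed on this gateway.
if !state.config.writeable_gateway() {
    return Err(GatewayError::new(
        StatusCode::METHOD_NOT_ALLOWED,
        "this gateway is not writable",
    ));
}
```

Per the HTTP spec, a 405 response should also carry an `Allow` header listing the methods that are supported.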

#[tracing::instrument(skip(state))]
pub async fn post_handler<T: ContentLoader + std::marker::Unpin>(
    Extension(state): Extension<Arc<State<T>>>,
    // Path(params): Path<HashMap<String, String>>,
Collaborator:

Let's drop unused code.

));
}

// TODO: check path & headers
Collaborator:

We should check for a valid scheme and all the headers. We should also check the CID against the bad_bits denylist, but given the logistics we would probably first need to add the content, then respond based on the CID, and remove the content again if its CID is in the list.
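Roughly, the flow described above could look like the following; every store/denylist name here is hypothetical, since the PR doesn't implement this yet:

```rust
// The CID is only known after the body has been chunked and added, so the
// bad_bits check happens afterwards, undoing the add on a match.
let cid = add_body_to_store(&state, body).await?;   // hypothetical helper
if cid_is_denylisted(&state, &cid).await {          // hypothetical bad_bits lookup
    remove_from_store(&state, &cid).await?;         // hypothetical helper
    return Err(GatewayError::new(
        StatusCode::FORBIDDEN,
        "CID is on the denylist",
    ));
}
```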

    &self,
    _content: T,
) -> Result<cid::Cid, anyhow::Error> {
    unimplemented!()
@Arqu (Collaborator) commented on Nov 30, 2022:

This is still unimplemented. Should we allow this?
On a separate note, it feels wrong for this to live in the content loader. I'm unsure whether the content loader is evolving into something more (and whether it should), or whether we should move this out and have it handled by the gateway client rather than the resolver (my preference).
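On the first point, one option is to fail gracefully instead of panicking; a sketch with an assumed method name, since the diff truncates it:

```rust
// Hypothetical default for loaders that cannot store content: return an error
// instead of hitting `unimplemented!()` at runtime.
async fn store_content<T: Send + 'static>(
    &self,
    _content: T,
) -> Result<cid::Cid, anyhow::Error> {
    anyhow::bail!("storing content is not supported by this content loader")
}
```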

@@ -168,6 +170,8 @@ pub struct File {
chunker: Chunker,
}

unsafe impl Send for File {}
Collaborator:

Not thrilled about unsafe code. @dignifiedquire what's our general policy on that front?

Contributor:

No, this is not safe to do. Each piece of the struct needs to be made `Send`.

@@ -35,6 +35,8 @@ pub enum ChunkerStream<'a> {
Rabin(LocalBoxStream<'a, io::Result<Bytes>>),
}

unsafe impl<'a> Send for ChunkerStream<'a> {}
Contributor:

No, this is not safe to do. The streams need to change to `BoxStream` to make this work.
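A sketch of that change (the non-`Rabin` variant name is assumed): `BoxStream` is `Pin<Box<dyn Stream + Send>>`, so the enum becomes `Send` without any `unsafe impl`, at the cost of requiring the inner streams themselves to be `Send` when they are boxed:

```rust
use bytes::Bytes;
use futures::stream::BoxStream;
use std::io;

// With BoxStream instead of LocalBoxStream, the Send bound lives on the boxed
// trait object and the compiler verifies it wherever the stream is constructed.
pub enum ChunkerStream<'a> {
    Fixed(BoxStream<'a, io::Result<Bytes>>),
    Rabin(BoxStream<'a, io::Result<Bytes>>),
}
```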

Path(PathBuf),
}

unsafe impl Send for Content {}
Contributor:

This should not be needed; if it is, something else is going wrong.

@dignifiedquire (Contributor) left a comment

None of the unsafe usages are actually safe; they all need reworking.

@fabricedesre (Contributor, Author)

@dignifiedquire I'm not surprised the unsafe parts need to be redone; it was mostly a "compiler error"-driven implementation.

I'm not convinced by the overall design anyway. As @Arqu comments in https://github.com/n0-computer/iroh/pull/499/files#r1035765976, we should settle on that question first.
