Client documentation is too limited #1571

Open
dekellum opened this issue Jun 16, 2018 · 12 comments
Labels
A-client Area: client. A-docs Area: documentation.

Comments

@dekellum

dekellum commented Jun 16, 2018

A few aspects of this were discussed in a users forum thread, but based on initial feedback in PRs #1565 and #1568, it seems a clearer problem statement is needed.

Problem Statement

The current examples/client.rs, its guide explanation, and the client module doctest use run, and no other alternatives are shown. In order to use run, these utilize futures::future::lazy to move the client into a single-request future. Outwardly this makes things look minimal, clean, and friendly, but as a pattern it is a dead end for building an application beyond the most trivial, single-request case:

  • The Client in this formulation can't be reused beyond the single request, so important features like connection keep-alive go unused.

  • Methods like run (as used here) or block_on are inappropriate inside an already-running runtime context. Only spawn is non-blocking, but no current client example shows that usage.

  • It's not obvious why lazy is needed, or what the shutdown requirements of the runtime and Client are. Omitting lazy, or moving the Client outside of the future, can result in these examples hanging (never shutting down).


Code examples could be improved to some extent with comments. Going further, in #1565 I propose using block_on in the doctest, and in #1568, spawn and explicit shutdown in examples/client.rs. Alternatively, instead of replacing the run + lazy examples, these could be adapted to show the alternatives alongside.
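
For reference, a minimal sketch of the run + lazy pattern described above, assuming the hyper 0.12-era API (hyper::rt re-exporting run, lazy, and the futures 0.1 combinators); the URI is a placeholder:

```rust
extern crate hyper;

use hyper::Client;
use hyper::rt::{self, Future, Stream};

fn main() {
    let uri: hyper::Uri = "http://httpbin.org/ip".parse().unwrap();

    // lazy defers construction until the runtime is running, and the Client
    // is moved into (and dropped with) this one request future, which is why
    // the runtime can shut down afterwards.
    rt::run(rt::lazy(move || {
        let client = Client::new();
        client
            .get(uri)
            .and_then(|res| {
                println!("status: {}", res.status());
                // Collect the streamed body chunks into a single buffer.
                res.into_body().concat2()
            })
            .map(|body| println!("body: {} bytes", body.len()))
            .map_err(|err| eprintln!("error: {}", err))
    }));
}
```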

@seanmonstar
Member

I agree with the sentiment of better explaining how to execute asynchronous client futures. As stated in the linked PRs, I'm not sure using block_on is the best thing to teach users, but it is definitely too tricky that the examples use futures::lazy without explaining why.

When looking at the doc example, what would users likely be trying to do? Just getting a simple fetch in fn main working? Or perhaps trying to tie in some asynchronous requests with the rest of an app?

Perhaps the docs can be updated to suggest looking at examples/client.rs for a "here's a simple fn main example", and that example could be updated with a comment about why rt::lazy is used. Then, the doc example could instead show that the futures should be spawned, removing rt::run and rt::lazy, and just using rt::spawn.
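
A sketch of what such a spawn-oriented example might look like, again assuming the hyper 0.12-era API (rt::spawn, rt::run, rt::lazy); the URI is a placeholder, and since something still has to drive the runtime, rt::run is kept here only to provide an executor:

```rust
extern crate hyper;

use hyper::Client;
use hyper::rt::{self, Future, Stream};

fn fetch(uri: hyper::Uri) {
    let client = Client::new();

    // rt::spawn hands the request future to the current runtime and returns
    // immediately; it never blocks the caller.
    rt::spawn(
        client
            .get(uri)
            .and_then(|res| res.into_body().concat2())
            .map(|body| println!("got {} body bytes", body.len()))
            .map_err(|err| eprintln!("request error: {}", err)),
    );
}

fn main() {
    // Something still has to drive the runtime; here rt::run does that.
    rt::run(rt::lazy(|| {
        fetch("http://httpbin.org/ip".parse().unwrap());
        Ok::<(), ()>(())
    }));
}
```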

@dekellum
Author

dekellum commented Jun 21, 2018

"What would [new] users [of the Client] likely be trying to do?"

I think we've covered all of these potential starting points/use cases:

  1. Unit tests of client usage (à la TDD, even if it's for testing a hyper Server application)
  2. An initially simple main for experimentation (scaling up to a real CLI)
  3. Client requests in an existing asynchronous server or other tokio/futures application

For (1) I'm surprised and would like to better understand your reluctance to include a block_on example. For my body-image testing, I've found that pattern quite useful, even for non-trivial tests:

dekellum/body-image@2c77175#diff-31bbf71c54bca98a0ae3d40a327af940R642

As I'm sure you are aware, you can't just replace the printing in the current example or doctest with asserts for unit tests. Asserts need to be performed in the test thread to properly panic the test runner. I would also imagine you've built up a much more elaborate testing harness in hyper itself, but as far as I know, you haven't packaged that for end-user reuse? Thus end users need to start from scratch.

In all of these use cases I think it's important that the first bullet of the problem statement is addressed, so that these aren't dead ends on the way to a real application. My proposals do just that for use cases (1) and (2), as I've learned ways of doing it, the hard way, without a particularly helpful guide. It would also be nice to see an example of that in the full setting of use case (3).
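
For illustration only, a sketch of the block_on test pattern described for use case (1), assuming tokio 0.1's Runtime and the hyper 0.12 Client; the endpoint and assertion are placeholders, not taken from the body-image tests:

```rust
extern crate hyper;
extern crate tokio;

use hyper::{Client, StatusCode};
use tokio::runtime::Runtime;

#[test]
fn fetch_returns_ok() {
    let mut rt = Runtime::new().expect("new runtime");
    let client = Client::new();

    // block_on drives the request future to completion on the runtime and
    // returns the result to the test thread, so asserts panic the test runner.
    let res = rt
        .block_on(client.get("http://localhost:8080/health".parse().unwrap()))
        .expect("request failed");

    assert_eq!(res.status(), StatusCode::OK);

    // The same Client and Runtime can be reused for further requests here,
    // keeping connections alive; dropping the Runtime at the end of the test
    // shuts everything down.
}
```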

@seanmonstar
Member

seanmonstar commented Jun 23, 2018

I'm sorry if my reluctance seems rude, I'm not trying to be! Throughout the lifetime of hyper 0.11, with future.wait() and core.run(fut), the issue tracker received bug reports from users who used those synchronous tools inappropriately. I've gotten plenty of personal email asking about the same, and people would show up in the IRC channel with the same confusion.

So, I'm afraid of promoting these blocking tools, as they can easily lead to bugs, and then people blame hyper for being a piece of crap. ;)

I do agree that block_on is very useful when writing tests, but I personally feel that the docs and examples should be guiding people towards using the Future combinators whenever they don't have the value right away. It comes up super often on the server side, when people want to return a Response, but only after getting the body, so they reach for a blocking utility and harm the server (if it still works at all).

@Crandel

Crandel commented Sep 23, 2018

Can you update the documentation and show how to extract the body from a response?
I need to parse a response and I don't have any idea how to do it. This was never a problem when using Python or Go for such a simple task.
I need to use HTTPS (thanks to Google, it's the default for all popular websites) and authorization headers, extract the body, and parse the string using a regex. And I need to do it for several websites and save the result to a file.
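
Not yet in the docs, but for reference, a sketch of the header-plus-body-extraction pattern being asked about, assuming the hyper 0.12 API; the URI and Authorization value are placeholders, and calling HTTPS sites would additionally require building the Client over a TLS-capable connector (for example from the hyper-tls crate) instead of using Client::new():

```rust
extern crate hyper;

use hyper::{Body, Client, Request};
use hyper::rt::{self, Future, Stream};

fn main() {
    rt::run(rt::lazy(|| {
        // Plain-HTTP client; for HTTPS, build the Client over a TLS-capable
        // connector via Client::builder().
        let client = Client::new();

        let req = Request::builder()
            .uri("http://httpbin.org/headers")          // placeholder URI
            .header("Authorization", "Bearer <token>")  // placeholder header
            .body(Body::empty())
            .expect("request builder");

        client
            .request(req)
            // concat2 collects the streamed body chunks into one buffer.
            .and_then(|res| res.into_body().concat2())
            .map(|body| {
                let text = String::from_utf8_lossy(&body);
                // From here the text can be matched with a regex, written to
                // a file, etc.
                println!("{}", text);
            })
            .map_err(|err| eprintln!("error: {}", err))
    }));
}
```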

@hwchen

hwchen commented Oct 5, 2018

Just want to note that the modified example in #1568 was important for helping me figure out (3) from above, sending requests from a client within an existing hyper server. I had thought about going down that path, and that example pushed me towards a solution.

@Crandel

Crandel commented Oct 5, 2018

@hwchen Yes, I saw this example, but it doesn't help if you need to call an HTTPS website (most of them use HTTPS) with authentication and other custom headers. The example should be closer to real life and real use cases.

@hwchen

hwchen commented Oct 5, 2018

@Crandel Sorry, I was responding to the thread in general, and not specifically to your concerns. If you want a user-friendly client, have you looked at https://github.com/seanmonstar/reqwest, which is an ergonomic layer over hyper? The experience is a lot closer to Python's requests or Go.

Hyper might be a little lower level than you need.

@Darksonn
Contributor

Darksonn commented Nov 6, 2018

As far as I can see when looking at the source code, Client appears to be fully internally reference counted, meaning that a Client can be cheaply cloned when it is needed in several places.

If this is a suggested pattern for using the client, it should probably be mentioned in the documentation.
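
As a rough illustration of that suggestion (hyper 0.12-era API assumed, placeholder URI), each task can be handed its own cheap clone while the underlying pool, and therefore keep-alive connections, stays shared:

```rust
extern crate hyper;

use hyper::Client;
use hyper::rt::{self, Future};

fn main() {
    rt::run(rt::lazy(|| {
        let client = Client::new();

        for i in 0..3 {
            // Cloning is cheap: clones share the same connection pool, so
            // keep-alive connections are reused across the spawned tasks.
            let client = client.clone();
            rt::spawn(rt::lazy(move || {
                client
                    .get("http://httpbin.org/ip".parse().unwrap())
                    .map(move |res| println!("request {}: {}", i, res.status()))
                    .map_err(|err| eprintln!("error: {}", err))
            }));
        }

        Ok::<(), ()>(())
    }));
}
```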


Regarding block_on, I'd like to mention an issue I've run into while testing my library, which uses hyper under the hood:
I created a rather complicated future using a bunch of joins and and_thens, which resulted in the type length exploding exponentially in the compiler and requiring annotations like #![type_length_limit="4194304"] to compile.
I imagine this is related to this issue on the rust compiler.

Now, large parts of it could be rewritten to use a bunch of block_on calls and avoid the issue. I realize this might not be the best way to handle it, and maybe one should be doing something else, but I'm not sure what that something else is.

@frenchtoastbeer

I'm writing an asynchronous client application based on hyper, and wow, has this been painful. The app takes a stack of queries for a single server and issues them, up to x concurrent requests at a time.

I've managed to get x concurrent requests sent to a server, but I don't yet have all the request results. Based on what I've learned from the examples, it's like I can either use the response status & headers, or the response body, but not both.

It's been... tough. I want the results all wrapped in a response type struct (which includes the body) and then sent to another thread via mpsc channel.

What I'd really like to see is some asynchronous client example that gives me some kind of environment (I'm guessing via a .map call?) which has the full response available.

So far, all the examples just immediately print the return status and headers, and then enter a completely different 'task' to chunk the response body and handle that separately.

I just want access to the full response, at one point in time. By "full response" I mean the return status, the headers, and the body... the full response. Instead, I've spent over ten hours just reading up on tokio, trying to figure out what futures are, or what the hell concat2 is even returning, all while pondering whether this is lazily evaluated. Or even map, for that matter: is it lazy? How can I ensure a web request is finished? I feel like "let body = res.into_body().concat2();" should allow me to at least do a "tx.try_send(body.to_vec())", but that doesn't work.

I'm a novice in Rust, and way out of my league apparently, but even from where I'm standing I'm pretty sure an example of an asynchronous app that builds a "full" response could wipe away a thousand tears.

@Darksonn
Contributor

@frenchtoastbeer I wrote an example for you here. I believe you missed the into_parts method.

As for what futures are... well, a future is just "some task that could be performed, and which has this result if we performed it". As for map, it's a method that, "given some task we could choose to perform, returns a new task that could be performed, which first does what the first task did, and then some more".

As for how to actually perform the future, you do it either by using the run method from tokio or by using the spawn method on a Runtime. You use run if you have one future you want to run, and spawn if you have several. (In fact, run just uses a single spawn internally.)
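
The linked example isn't reproduced here, but a sketch of the into_parts approach (hyper 0.12 API assumed, placeholder URI) looks roughly like this: split the response into its head and body, buffer the body with concat2, and pass everything along together:

```rust
extern crate hyper;

use hyper::Client;
use hyper::rt::{self, Future, Stream};

fn main() {
    rt::run(rt::lazy(|| {
        let client = Client::new();

        client
            .get("http://httpbin.org/ip".parse().unwrap())
            .and_then(|res| {
                // Keep the status and headers, and turn the body stream into
                // a single buffered chunk.
                let (parts, body) = res.into_parts();
                body.concat2().map(move |body| (parts, body))
            })
            .map(|(parts, body)| {
                // Status, headers, and body are all available at one point in
                // time; from here they could be packed into a struct and sent
                // over an mpsc channel.
                println!("status:  {}", parts.status);
                println!("headers: {:?}", parts.headers);
                println!("body:    {} bytes", body.len());
            })
            .map_err(|err| eprintln!("error: {}", err))
    }));
}
```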

@frenchtoastbeer

frenchtoastbeer commented Dec 18, 2018

@Darksonn You're incredible! I love you. A lot. Your code is so clean!?! How long did that take you? I think.... I think I can accomplish my goal now. Please include Darksonn's example in the hyper client source files.

@Darksonn
Contributor

Darksonn commented Dec 18, 2018

@frenchtoastbeer Well I'm just glad I could help. It took me around an hour, although I've already done a bunch of stuff with futures and hyper in the past, as I've been writing a library using hyper (which has been temporarily put on hold until async fn is stable).

As for including it as an example, notice that it makes use of spawn, which was mentioned in the top post of this issue, although there should probably also be an example using the other spawn, intended for use inside a future.
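
A small sketch of that distinction (tokio 0.1 assumed): Runtime::spawn is called on the runtime handle from outside any task, while tokio::spawn (which hyper re-exports as hyper::rt::spawn) may only be called from inside an already-running task:

```rust
extern crate futures;
extern crate tokio;

use futures::Future;
use futures::future::lazy;
use tokio::runtime::Runtime;

fn main() {
    let mut rt = Runtime::new().expect("new runtime");

    // Variant 1: spawn from outside any task, via the runtime handle.
    rt.spawn(lazy(|| {
        println!("outer task running");

        // Variant 2: from inside a running task, use tokio::spawn.
        tokio::spawn(lazy(|| {
            println!("inner task running");
            Ok::<(), ()>(())
        }));

        Ok::<(), ()>(())
    }));

    // Wait for every spawned task to finish, then shut down.
    rt.shutdown_on_idle().wait().expect("shutdown");
}
```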
