
Differences with yew #30

Closed · qm3ster opened this issue Sep 17, 2018 · 12 comments

@qm3ster
Contributor

qm3ster commented Sep 17, 2018

What are the architectural differences to DenisKolodin/yew that are intentional, and what are incidental?
What is the project's stance on koute/stdweb?
This question was prompted by thinking of what an isomorphic network fetch would look like, and looking at https://github.com/DenisKolodin/yew/blob/master/src/services/fetch.rs

@chinedufn
Owner

chinedufn commented Sep 18, 2018

But I really can't speak on yew all that much without having used it.

In terms of what Percy's goals are: I think it wants to be less of a framework and more of a very well supported recommendation on how to build web applications in Rust.

We've chosen wasm-bindgen over stdweb since it is a bit more minimal of a foundation and is being actively developed and backed by Mozilla.

wasm-bindgen was built from day one to take advantage of the host bindings proposal, which means that Percy will get those same performance benefits.


wasm-bindgen's ecosystem already has examples of things like fetch so I'm not even sure if we'd need to do much on our side. People can just straight up leverage window.fetch like they would in a JavaScript app.
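
A minimal sketch of what that could look like from Rust, assuming the web-sys crate with its Window and Response features enabled; the helper name is hypothetical, and the async style here postdates this thread (the example linked later used futures 0.1):

```rust
use wasm_bindgen::{JsCast, JsValue};
use wasm_bindgen_futures::JsFuture;
use web_sys::Response;

/// Hypothetical helper: GET a URL with the browser's fetch and return the body as text.
pub async fn fetch_text(url: &str) -> Result<String, JsValue> {
    let window = web_sys::window().expect("should run in a browser window");

    // window.fetch(url) returns a JS Promise; JsFuture turns it into a Rust Future.
    let resp_value = JsFuture::from(window.fetch_with_str(url)).await?;
    let resp: Response = resp_value.dyn_into()?;

    // response.text() is another Promise that resolves to a JS string.
    let text_value = JsFuture::from(resp.text()?).await?;
    Ok(text_value.as_string().unwrap_or_default())
}
```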


I think that Percy's only goal is to make it easy to build production grade web applications in Rust without needing to learn a ton of new concepts. And it should be dead simple to choose which pieces you use and which you decide to replace with other libraries.

The biggest milestone will be a percy new my-app-name command that generates a production grade web application that gets 90% of people 90% of the way there with what they're trying to do.

With all types of production level considerations like:

  • dev vs. prod can point to different URLs for assets
  • OAuth
  • etc, etc, etc..

But before that we'll need to learn a bit more about what a production grade Rust web application should look like (by actually using Percy ourselves and fixing points of friction)

@chinedufn
Owner

Closing but feel free to re-open if ya have more thoughts to add!!!

@qm3ster
Contributor Author

qm3ster commented Sep 18, 2018

re: fetch, I was talking about something that can be used both on the server side and the client side transparently. Something like what axios provides.
The most basic place to use that would probably be in the "routing middleware", so that the same API requests can be made for SSR as would be made during client-side navigation to the route.
One notable feature of such a fetch is that it should proxy authentication headers and cookies from the client, so that it can make authenticated API requests on behalf of the client.
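
A rough sketch of the shape of such an abstraction — nothing here is Percy API, all names are invented, and the signatures are synchronous only to keep the sketch short (a real version would return futures):

```rust
use std::collections::HashMap;

/// Headers (e.g. Authorization, Cookie) proxied from the original client request,
/// so SSR can make authenticated API requests on the client's behalf.
pub type ForwardedHeaders = HashMap<String, String>;

/// One fetch interface that both targets implement: the browser build would wrap
/// window.fetch, the server build would wrap a native HTTP client and forward headers.
pub trait IsomorphicFetch {
    fn fetch(&self, url: &str, headers: &ForwardedHeaders) -> Result<Vec<u8>, String>;
}

/// Routing middleware would hold whichever implementation the build provides and
/// use it identically during SSR and during client-side navigation.
pub fn load_route_data<F: IsomorphicFetch>(
    fetcher: &F,
    api_url: &str,
    headers: &ForwardedHeaders,
) -> Result<Vec<u8>, String> {
    fetcher.fetch(api_url, headers)
}
```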

@qm3ster
Contributor Author

qm3ster commented Sep 18, 2018

re: wasm-bindgen vs stdweb, I appreciate that wasm-bindgen is close to native and in a way contains a polyfill for the host bindings proposal, but depending on the layer at which Percy application development is going to happen, it might be the wrong level of abstraction. Or it might be just the right one!

What level of interop with JS libraries are we aiming for?

As for stdweb, its development doesn't seem to have stagnated; it seems quite active.

For the Percy core itself, such as the DOM patching, wasm-bindgen might be the thicker abstraction, since stdweb doesn't use wasm-bindgen but directly optimizes for the available wasm-JS communication.

@qm3ster
Contributor Author

qm3ster commented Sep 18, 2018

I now see that the wasm-bindgen/web_sys solution is quite concise - https://github.com/rustwasm/wasm-bindgen/blob/master/examples/fetch/src/lib.rs
But we need to pull the binary response body into wasm so we can deserialize or otherwise use it (flatbuffers/serde/arbitrary binary), and then provide the same user-facing interface during SSR.
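
A rough sketch of pulling the binary body into wasm memory and handing it to serde, assuming the web-sys Response feature and a serde-deserializable Post type (both hypothetical here):

```rust
use js_sys::Uint8Array;
use serde::Deserialize;
use wasm_bindgen::JsValue;
use wasm_bindgen_futures::JsFuture;
use web_sys::Response;

/// Hypothetical payload type; anything serde can deserialize works the same way.
#[derive(Deserialize)]
pub struct Post {
    pub id: u32,
    pub title: String,
}

pub async fn response_to_post(resp: Response) -> Result<Post, JsValue> {
    // response.arrayBuffer() resolves to a JS ArrayBuffer.
    let buf = JsFuture::from(resp.array_buffer()?).await?;

    // Copy the ArrayBuffer into linear wasm memory as a Vec<u8> that serde,
    // flatbuffers, or any other binary decoder can consume.
    let bytes = Uint8Array::new(&buf).to_vec();

    serde_json::from_slice(&bytes).map_err(|e| JsValue::from_str(&e.to_string()))
}
```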

@chinedufn
Owner

re:fetch, I was talking about something that can be used both on the server side and the clientside transparently. Something like what axios provides.

Ahh now I see what you're saying.

At first thought I'd actually be against that idea, since it would mean you'd have to wait for a bunch of async requests to finish before the page was served. I think that is best handled on the frontend with a loading spinner, and the backend should do nothing more than render a string of HTML and send it down.


Also stdweb already wants to port itself to be written on top of wasm-bindgen rustwasm/team#226 (comment)

@qm3ster
Contributor Author

qm3ster commented Sep 18, 2018

Like many other ideas, I took that from Next.js (getInitialProps) / Nuxt.js (asyncData), so it is a thing people do.

I think that is best to happen on the frontend with a load spinner

Perhaps, but that is more of a cached single-page-app pattern, vs. isomorphic, which is one of this project's keywords.

Of course, SPA may be a better pattern for wasm at this point, what with no code splitting.

@chinedufn
Owner

chinedufn commented Sep 18, 2018

Hmm, interesting... I was imagining that users would see absolutely nothing until these async requests finished, which sounded like a poor experience, but okay, it sounds like that doesn't need to be the case when you have code splitting.

what with no code splitting

Ya that's probably a blocker for exploring something like this.


Hmmm so maybe exploring code splitting could be its own issue?

There's already an example of using wasm code to call other wasm code, so in theory you could split your application up into different crates, compile each crate to wasm individually, and load up what you need when you need it.

That's just off the top of my head though... we'd probably have to think a bit more about the ergonomics
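
For reference, a rough sketch of that "wasm calling wasm" idea, modeled loosely on the wasm-bindgen wasm-in-wasm example; the embedded module bytes and the add export are hypothetical placeholders:

```rust
use js_sys::{Function, Object, Reflect, WebAssembly};
use wasm_bindgen::{JsCast, JsValue};
use wasm_bindgen_futures::JsFuture;

// Bytes of a second crate compiled to wasm (could also be fetched lazily instead).
const OTHER_MODULE: &[u8] = include_bytes!("other_module.wasm");

pub async fn call_other_module() -> Result<JsValue, JsValue> {
    // WebAssembly.instantiate(bytes, imports) returns a Promise of { module, instance }.
    let result = JsFuture::from(WebAssembly::instantiate_buffer(
        OTHER_MODULE,
        &Object::new(),
    ))
    .await?;

    let instance: WebAssembly::Instance =
        Reflect::get(&result, &"instance".into())?.dyn_into()?;

    // Look up an exported function by name and call it with two arguments.
    let add = Reflect::get(instance.exports().as_ref(), &"add".into())?
        .dyn_into::<Function>()?;
    add.call2(&JsValue::UNDEFINED, &1.into(), &2.into())
}
```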

@qm3ster
Contributor Author

qm3ster commented Sep 18, 2018

Going to the other module through JS seems like quite a bit of pain. Better tools for that will hopefully materialize soon, and native support for wasm-to-wasm splitting is also in the works. That said, wasm is significantly more economical than JS in both transfer and compilation, so let's ponder that problem when we face it. In the meantime, server-side rendering of a full, contentful page gives us a lot of extra time to download a fat wasm binary without upsetting the citizens.

@qm3ster
Contributor Author

qm3ster commented Sep 18, 2018

Another totally insane approach would be to have async fragments in the page. This way components can first give the page a placeholder, so that something appears, and then follow that with a proper render, all of it streamed as dumb HTML à la Facebook's BigPipe / zalando/tailor.
I am personally against this though, as it would likely only increase time-to-content, or at best do nothing for it. An extremely fast, low-latency (possibly already established) connection to the APIs plus a fast server CPU, versus a mobile device, is the fastest way for a device with an empty cache to get content.

SPA + fetch:

  1. Download contentless page
  2. Download and parse scripts
  3. Download and parse wasm
  4. Hydrate contentless page
  5. Make fetch requests <- only here the API server knows it's even wanted
  6. Download fetch response
  7. Update page with content <- only here content is seen
  8. Interactive page with content

SSR:

  1. Client request
  2. Server request to API server over fast connection
  3. Contentful page quickly rendered to string
  4. Contentful page downloaded by client <- the content can now be seen
  5. Download and parse scripts
  6. Download and parse wasm
  7. Hydrate contentful page with data embedded at the end of the HTML, no extra fetch (see the sketch after this list)
  8. Interactive page with content
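
A hedged sketch of step 7: the server appends the serialized initial state to the HTML it renders, and the client reads it back during hydration instead of fetching again. Type names, the element id, and the web-sys features (Window, Document, Element) are assumptions, not Percy API:

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
pub struct InitialState {
    pub user_name: String,
}

/// Server side: render the app plus a JSON blob at the end of the document.
pub fn render_page(app_html: &str, state: &InitialState) -> String {
    let json = serde_json::to_string(state).expect("state should serialize");
    format!(
        "<!DOCTYPE html><html><body>{app}\
         <script id=\"initial-state\" type=\"application/json\">{json}</script>\
         </body></html>",
        app = app_html,
        json = json
    )
}

/// Client side (compiled to wasm): read the blob back out during hydration.
#[cfg(target_arch = "wasm32")]
pub fn read_initial_state() -> Option<InitialState> {
    let document = web_sys::window()?.document()?;
    let el = document.get_element_by_id("initial-state")?;
    serde_json::from_str(&el.inner_html()).ok()
}
```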

@chinedufn
Owner

That's a very interesting visual. It would be nice if it were easy to choose what makes sense in your own situation. Because in some situations

Server request to API server over fast connection

Will not feel faster to a user than seeing something immediately with a load spinner. It's probably a bit case by case.

This certainly sounds like something that we could proof of concept to see how it feels in different scenarios though.

I think we're already recommending a structure that makes this pretty "simple" to try out.

The server and client crates both depend on the app crate.

app has a feature for enabling either fetch-powered or some-rust-request-library-powered requests, and then a super light wrapper around fetch and some-rust-request-library that the components use.

Then on the client side we'd use fetch requests, and on the server side we'd use some-rust-request-library. We keep a counter of the number of pending requests, and the server doesn't send its response until all of them resolve.

Of course there are probably more fine-grained considerations than that, but that's roughly how something like this could look.
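
A loose sketch of that feature-gated wrapper, under the assumption of two Cargo features named browser-fetch and server-request (names invented) and reqwest standing in for some-rust-request-library; the pending-request counter is left out:

```rust
/// Client build: compiled with the (hypothetical) `browser-fetch` feature and
/// backed by a wasm-bindgen powered fetch like the one sketched earlier.
#[cfg(feature = "browser-fetch")]
pub fn http_get(url: &str) -> Result<Vec<u8>, String> {
    // On wasm this would delegate to a window.fetch-based helper
    // (and would realistically be async rather than blocking).
    unimplemented!("delegate to the wasm-bindgen fetch wrapper for {}", url)
}

/// Server build: same signature, backed by a native Rust HTTP client
/// (reqwest's blocking API is just one possible choice).
#[cfg(feature = "server-request")]
pub fn http_get(url: &str) -> Result<Vec<u8>, String> {
    let resp = reqwest::blocking::get(url).map_err(|e| e.to_string())?;
    let bytes = resp.bytes().map_err(|e| e.to_string())?;
    Ok(bytes.to_vec())
}
```

Components would only ever call http_get, so swapping the backend is a Cargo feature flip rather than a code change.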

@qm3ster
Contributor Author

qm3ster commented Sep 18, 2018

too bad async/await is taking foreverish
