
Bug: weird useTransition behaviour #19473

Closed
vkurchatkin opened this issue Jul 28, 2020 · 27 comments
Labels
Status: Unconfirmed A potential issue that we haven't yet confirmed as a bug

Comments

@vkurchatkin

vkurchatkin commented Jul 28, 2020

A couple of examples of unexpected and seemingly broken useTransition behaviour:

1. useTransition doesn't work if the component suspends before the useTransition call:

Sandbox: https://codesandbox.io/s/gracious-lumiere-242bo?file=/src/First.js

Code:

export default function First() {
  cache.read(); // <-- read from cache first
  const [startTransition, isPending] = React.unstable_useTransition(config); // <-- call useTransition after

  const [rev, setRev] = React.useState(0);

  function reload() {
    cache.expire();

    startTransition(() => {
      setRev(rev => rev + 1);
    });
  }

  return (
    <div>
      {isPending ? "Pending" : "Not pending"}
      <br />
      {rev}
      <br />
      <button onClick={reload}>Reload</button>
    </div>
  );
}

The current behavior

Fallback is always shown

The expected behavior

Fallback is never shown

This issue is somewhat fixed if the cache read and the useTransition call are simply swapped, but then we get issue number 2:

2. useTransition prevents the fallback, but isPending is either always false or only true for a small period of time:

Sandbox: https://codesandbox.io/s/gracious-lumiere-242bo?file=/src/Second.js

Code:

export default function Second() {
  const [startTransition, isPending] = React.unstable_useTransition(config); // <-- call useTransition first
  cache.read();  // <-- read from cache after

  const [rev, setRev] = React.useState(0);

  function reload() {
    cache.expire();

    startTransition(() => {
      setRev(rev => rev + 1);
    });
  }

  return (
    <div>
      {isPending ? "Pending" : "Not pending"}
      <br />
      {rev}
      <br />
      <button onClick={reload}>Reload</button>
    </div>
  );
}

This works much better, but still has some issues

The current behavior

  • the fallback is still sometimes shown, mostly when you click the button more than once in quick succession
  • isPending is either always false or only true for a small period of time (you can see the flash)

The expected behavior

  • fallback is never shown
  • isPending is true while suspended

This is probably the same issue as #19046

React version: 0.0.0-experimental-4c8c98ab

@vkurchatkin vkurchatkin added the Status: Unconfirmed A potential issue that we haven't yet confirmed as a bug label Jul 28, 2020
@gaearon
Collaborator

gaearon commented Jul 28, 2020

I think the issue is that cache.expire is not a part of the transition.

For useTransition to work, React needs to be able to continue rendering either of the two "worlds" independently. But a mutation like cache.expire() means there is no way to re-render a component in the "old world". The old cache is gone. So the app can't remain interactive. And we can't show a pending state.

To solve this, you would need to implement cache eviction differently. One way to model it is to put the cache itself into state of some top-level component. Then invalidating the cache is setCache(new Cache()) inside startTransition. This ensures the old cache is still alive and can be used until the transition is done.
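
A minimal sketch of that idea (Cache here is a hypothetical class standing in for whatever cache implementation the app uses, config is from the sandbox, and First is assumed to read from the cache it receives as a prop rather than a module-level one):

// Minimal sketch, not a complete implementation.
function App() {
  const [cache, setCache] = React.useState(() => new Cache());
  const [startTransition, isPending] = React.unstable_useTransition(config);

  function refresh() {
    startTransition(() => {
      // Instead of mutating the old cache, swap in a fresh one inside the
      // transition. The old cache stays alive, so React can keep rendering
      // the current UI (with isPending: true) until the new cache has data.
      setCache(new Cache());
    });
  }

  return (
    <div>
      {isPending ? "Pending" : "Not pending"}
      <button onClick={refresh}>Reload</button>
      <React.Suspense fallback="Loading...">
        <First cache={cache} />
      </React.Suspense>
    </div>
  );
}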

In a real app you'll likely want to tie the cache lifetime to navigation somehow. I.e. navigating to a link should probably use a new cache while pressing "Back" should reuse an existing cache. There is a lot to work out there, which is part of the reason we're not yet releasing Suspense for data fetching for broad consumption.

@vkurchatkin
Author

Thanks for the response. This makes a lot of sense and is a good reminder that Suspense is very nuanced.

But a mutation like cache.expire() means there is no way to re-render a component in the "old world"

Does this mean that the cache should essentially be immutable, or is there a middle ground?

One way to model it is to put the cache itself into state of some top-level component.

Unfortunately, that doesn't work for me. I have only one Suspense at the top level, and my goal is to never show its fallback after the initial render. So all these caches would have to live above this Suspense at the very top of the tree, which violates the current architecture pretty badly.

How about the following strategy:

const [time, setTime] = React.useState(null);
const data = cache.read({ time }); // `time` means that we want a cache record not older than this time;
                                   // if `time` is null, any record is fine

function reload() {
  cache.expire(); // doesn't delete the record, but updates its expiration time to the current time

  startTransition(() => {
    setTime(Date.now());
  });
}

// ...

There is a lot to work out there, which is part of the reason we're not yet releasing Suspense for data fetching for broad consumption.

I know that, but I'm willing to be an early adopter and prepared for the consequences.

@vkurchatkin
Author

vkurchatkin commented Jul 28, 2020

Or maybe something like this:

const [resource, setResource] = React.useState(() => cache.get(key));

const data = resource.read();

function reload() {
  cache.expire(key);
  setResource(cache.get(key));
}

@gaearon
Collaborator

gaearon commented Jul 28, 2020

Have you looked at Relay Hooks and its concept of Entrypoints and Fragment References?

@gaearon
Collaborator

gaearon commented Jul 28, 2020

Does this mean that cache should essentially be immutable, or there is a middleground?

In particular, in Relay, the cache is mutable. I'll ask someone from Relay to briefly describe how they make it work.

@josephsavona
Contributor

josephsavona commented Jul 28, 2020

Relay has a mutable cache with support for cache eviction (GC) and Suspense. The key to making this work is to only evict data from the cache once you know it's no longer needed. In the original post, the sequence is to evict first (cache.expire()), then re-render with a useTransition. Because the data is evicted first, if the re-render suspends, there is no data left in the cache. In other words, data is getting evicted too early, while it's still needed.

The high-level approach is to wait to evict entries from the cache until you've committed the refetch (until the transition completes) OR to use React state to capture the previous values. The latter is likely simpler and requires wrapping access to the cache in a hook, which can keep state. So instead of directly reading the cache in your component, you might have a const [data, refetch] = useCache(cache) hook. The idea is that the hook would keep the most recent value in state. The "refetch" function would immediately evict the cache and setState(cache.read()). Because the cache values are captured in state, React will hold onto the previous value until the refetch commits.

function useCache(cache) {
  const [cacheState, setCacheState] = useState(() => cache.read());
  const refetch = useCallback(() => {
    cache.evict();
    // ...actually kick off the refetch here...
    setCacheState(cache.read());
  }, [cache]);
  return [cacheState, refetch]
}

@vkurchatkin
Author

Have you looked at Relay Hooks and its concept of Entrypoints and Fragment References?

I guess I should, thanks for the tip. I'm not very familiar with Relay, but I think it is somewhat close to what I want: a mutable shared cache that acts as a source of truth for normalized entities (somewhat like what people do with Redux), but Suspense-enabled.

@josephsavona thanks a lot. A couple of questions:

  1. setCacheState(cache.read()); - I'm not sure how this could work. This is inside a callback, so cache.read() would just throw a promise and nothing would happen. Or am I missing something?

Relay has a mutable cache with support for cache eviction (GC). The key to making this work is to only evict data from the cache once you know it's no longer needed.

Any pointers about that? The only thing I can think of is to use effects to do reference counting of sorts:

const data = cache.read(key);

React.useEffect(() => {
  cache.retain(key);

  return () => {
    cache.release(key);
  };
}, [key]);

This seems problematic, since we actually need to call cache.retain(key) much earlier, preferably as soon as we read, but then we can't actually guarantee that cache.release is going to be called the same number of times, or even called at all.

And finally, is there some set of rules for this stuff, like with everything else in React? So far I see the following:

  1. The cache itself can be mutable (e.g. new Map() is fine)
  2. Cache records can also be mutable, so it is ok if cache.read() returns different values on subsequent renders
  3. Cache records, however, can't be deleted completely while they are still used by any component, so if any kind of cache eviction is needed, it is required to track the keys that are currently in use and only evict those that are not

@josephsavona
Contributor

josephsavona commented Jul 28, 2020

setCacheState(cache.read()); - I'm not sure how this could work. This is inside a callback, so cache.read() would just throw a promise and nothing would happen. Or am I missing something?

Ah I see, I wasn't quite sure how the API for your cache worked. In that case the state you would store would have to be a union of Resolved(Value) | Error | Pending(Promise):

type CacheValue<T> =
  | {kind: 'resolved', value: T}
  | {kind: 'error', error: Error}
  | {kind: 'pending', promise: Promise<void>};

type Cache<T> = {
  // note: changing read() to get(). This doesn't suspend; it's the hook that
  // suspends based on the cache value kind.
  get: () => CacheValue<T>,
  evict: () => void,
};

function useCache<T>(cache: Cache<T>) {
  const [cacheState, setCacheState] = useState<CacheValue<T>>(cache.get());
  const refetch = useCallback(() => {
    cache.evict();
    // initiate the refetch here, calling `setCacheState()` when it's done
    setCacheState(cache.get()); // safe to call get() here now
  }, [cache]);
  if (cacheState.kind === 'error') {
    throw cacheState.error; // throw the error so it bubbles to an error boundary
  } else if (cacheState.kind === 'pending') {
    throw cacheState.promise; // suspend
  } else /* kind === 'resolved' */ {
    return [cacheState.value, refetch]; // resolved, return the value
  }
}

@josephsavona
Contributor

Any pointers about that? The only thing I can think of is to use effects to do reference counting of sorts:

Our solution for this is Relay EntryPoints, and the general pattern of "render-as-you-fetch" described in the React docs and some recent blog posts. The key idea is that you initiate data fetching before you render, in an imperative event handler, and then immediately start rendering without waiting for the data to be ready. If the data is still pending when you render, you suspend. The key here is that the event handler provides a point to manage the lifetime of the resource. So for example you might have a router, or a tab manager component, which outlives each child (route/tab) and can act as a point to hold onto the data for the current route/tab for as long as necessary.
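
A rough sketch of that shape (cache.prefetch(key) returning a readable resource, resource.read(), config, initialKey and nextKey are all hypothetical placeholders, not Relay or React APIs):

// Rough sketch of "render-as-you-fetch": the fetch starts in the event
// handler, before React re-renders; rendering only reads the resource.
const initialResource = cache.prefetch(initialKey); // kicked off before the first render

function Profile() {
  const [resource, setResource] = React.useState(initialResource);
  const [startTransition, isPending] = React.unstable_useTransition(config);

  function open(key) {
    // The event handler owns the resource: it starts the request and is the
    // natural place to retain/release it for as long as the route/tab lives.
    const next = cache.prefetch(key);
    startTransition(() => {
      setResource(next);
    });
  }

  const data = resource.read(); // suspends only if the prefetch hasn't finished yet
  return (
    <div>
      {isPending ? "Updating..." : null}
      <pre>{JSON.stringify(data, null, 2)}</pre>
      <button onClick={() => open(nextKey)}>Next</button>
    </div>
  );
}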

@vkurchatkin
Author

@josephsavona I see, so essentially the same code that kicks off prefetching also prevents cache eviction. I would imagine it somewhat like this:

React.useEffect(() => {
  cache.prefetch(key);
  cache.retain(key);

  return () => {
    cache.release(key);
  };
}, [key]);

Unfortunately, this requires always using "render-as-you-fetch", but I need to be able to use "fetch-on-render" when needed. "Render-as-you-fetch" is cool in theory, but requires complex infrastructure. It is so much easier to just use "fetch-on-render" and add prefetching as an optimisation technique where needed.
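
To illustrate, a rough sketch (cache.read(key) fetches and suspends on a miss, cache.prefetch(key) just warms the cache, and the user/friendId shape is made up):

// Rough sketch of "fetch-on-render", with prefetching layered on top purely
// as an optimisation.
function UserPage({ id }) {
  const user = cache.read(["user", id]); // fetch-on-render: suspends on a cache miss

  // Optional optimisation: warm the cache before the user navigates.
  const prefetchNext = () => cache.prefetch(["user", user.friendId]);

  return (
    <div>
      {user.name}
      <a href={`/users/${user.friendId}`} onMouseEnter={prefetchNext}>
        Next profile
      </a>
    </div>
  );
}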

@arackaf

arackaf commented Jul 31, 2020

@gaearon "For useTransition to work, React needs to be able to continue rendering either of the two "worlds" independently. But a mutation like cache.expire() means there is no way to re-render a component in the "old world"."

But the old world is in fact still rendering correctly; it's just that useTransition is not keeping the pending flag true while the suspended render resolves.

@vkurchatkin
Author

But the old world is in fact still rendering correctly; it's just that useTransition is not keeping the pending flag true while the suspended render resolves.

I guess it makes sense that either this happens or the fallback is shown, since it's impossible to re-render the "old world" with isPending set to true. What surprises me is that isPending sometimes flashes, but I guess that happens when the "new world" render is initially committed.

@arackaf

arackaf commented Jul 31, 2020

Arrrggghhhh, so you need to evict the cache in the "new" world, but keep it around in the "old" world. Which is why @gaearon said to keep your cache in state. I understand now.

@gaearon
Collaborator

gaearon commented Jul 31, 2020

Yup!

@vkurchatkin
Author

to keep your cache in state

I'm starting to think that this is pretty much the only solution that guarantees correctness when reading from an external mutable source, since it allows React to take care of versioning.

@arackaf

arackaf commented Jul 31, 2020

Right. And of course your hook will abstract that from your app code. Doing so correctly being the tricky part!

@vkurchatkin
Author

vkurchatkin commented Jul 31, 2020

Also, I've noticed that if you do need to force a re-render for anything (like I did to trigger a refetch), your code is very likely incorrect.

@gaearon
Collaborator

gaearon commented Jul 31, 2020

@arackaf I'd appreciate it if we could keep this discussion on topic and avoid sarcasm. :-) It's generally expected that if you integrate with an undocumented feature that is in active development, doesn't have the best practices worked out, and is missing some fundamental pieces (such as a reference caching implementation), there will be integration issues and misunderstandings. There will also be bugs and questions that don't have clear answers.

@arackaf

arackaf commented Jul 31, 2020

@gaearon I deleted the comment. To be frank though, it would have been nice to have a bit more guidance for library authors. You guys dropped some cool data fetching demos many months ago, but with no guidance at all on how userland library folks could upgrade to take advantage, and there's been no small amount of pain in figuring it out on our own. I get that not everything is done, but some sort of living document with basic best practices, i.e. exactly what you described above for cache eviction, would have been a lifesaver.

@vkurchatkin
Author

Ok, I think we can close it. It turned out to be more of a discussion than a bug report, but it was truly enlightening.

@gaearon
Collaborator

gaearon commented Jul 31, 2020

@arackaf

We'll definitely share the guidelines when the overall story is more fleshed out. I get that it's annoying to wait for, but we also don't want to create noise now precisely because, as you said, we've done that too early in the past with demos, and that caused some churn. In particular, we expect to make some progress in the caching area, so that many solutions would be able to rely on our cache (or at least, some shared infrastructure). In that case, instructing people to implement one manually today seems like creating more extra work and churn, and as you noted, it does get pretty mind-bendy doing everything yourself.

@arackaf

arackaf commented Aug 5, 2020

@gaearon given the wonderful explanation you provided here: https://twitter.com/dan_abramov/status/1290389569788092416

Can you elaborate a bit on why this use case doesn't work? So you have transition state changes which suspend. But your non-transition state changes (i.e. the prior current state) also suspend.

It would seem the correct behavior there would be to show the Suspense boundary, wouldn't it? Or I guess you all could also just show the pending flag?

But is it correct behavior for React to do neither of those things, and just keep showing the prior state?

@gaearon
Collaborator

gaearon commented Aug 5, 2020

Which of the cases described in the OP are you referring to? First or second one?

@arackaf

arackaf commented Aug 5, 2020

@gaearon second one.

@gaearon
Collaborator

gaearon commented Aug 5, 2020

OK, I can dig into this. I haven't because usually I stop looking as soon as I see the rules being broken. (In this case, a mutation that makes it impossible to re-render the first version.)

@gaearon
Collaborator

gaearon commented Aug 5, 2020

I added a few logs and increased the delay so it's clearer what's happening:

https://codesandbox.io/s/wonderful-murdock-1votb?file=/src/Second.js

  console.log("trying revision " + rev + ". pending: " + isPending);
  try {
    cache.read();
  } catch (x) {
    console.log("-- NOPE! I got suspended.");
    throw x;
  }
  console.log("-- YES! This will get to the screen!");

The first render goes as expected:

trying revision 0. pending: false 
-- NOPE! I got suspended. 
trying revision 0. pending: false 
-- YES! This will get to the screen! 

We see the initial fallback (nothing to show) and then retry and it works.

Then we press the button.

trying revision 0. pending: true 
-- NOPE! I got suspended. 
trying revision 1. pending: false 
-- NOPE! I got suspended. 

(... delay while nothing happens onscreen for a few seconds ...)

trying revision 0. pending: true 
-- YES! This will get to the screen! 
trying revision 1. pending: false 
-- YES! This will get to the screen! 

Let's break it down.

When we pressed the button, we did two things:

  1. Mutated the cache without the possibility of re-rendering the old state (breaking the rule)
  2. Started a rev state update in a transition:
    cache.expire();

    startTransition(() => {
      setRev(rev => rev + 1);
    });

React knows there's an upcoming transition. This is why the first thing it tries to do is to render the busy indicator, flipping the isPending flag — while still keeping the old rev state set to 0:

trying revision 0. pending: true 

But because we mutated the cache, that suspends. (A cache should be written in a way to avoid that, as explained above.)

-- NOPE! I got suspended. 

So we can't actually get the render output to show the busy indicator. I think it's a valid question why this doesn't lead to the fallback showing. I think it's similar to asking why mutating state doesn't lead to a re-render in general. React doesn't "know" when you mutate state. Similarly, it doesn't "know" you've decided to mutate the cache in a way that makes re-rendering the past state impossible.

Note how if you add a button that makes a setState (even on some other piece of state), as soon as you click it, the fallback will show. This is because from React's point of view, there is now a reason to re-render. Maybe this is something we could be more resilient to, but it's not clear that it's actually useful to "better support" the cases where the rules are being broken.
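
For example, a variation on Second.js with a hypothetical extra button (not part of the sandbox, just to illustrate):

// The extra button triggers an unrelated setState, which gives React a
// reason to re-render the current tree. Because the cache was already
// mutated, that re-render suspends, and the Suspense fallback appears.
export default function Second() {
  const [startTransition, isPending] = React.unstable_useTransition(config);
  cache.read();

  const [rev, setRev] = React.useState(0);
  const [, poke] = React.useState(0); // unrelated piece of state

  function reload() {
    cache.expire();
    startTransition(() => {
      setRev(rev => rev + 1);
    });
  }

  return (
    <div>
      {isPending ? "Pending" : "Not pending"} ({rev})
      <button onClick={reload}>Reload</button>
      <button onClick={() => poke(n => n + 1)}>Unrelated update</button>
    </div>
  );
}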

Next thing (after we failed to show a pending state), we try to actually render the new rev value:

trying revision 1. pending: false 
-- NOPE! I got suspended. 

That suspends, as intended. Then we wait.

Finally, once the Promise resolves, we get to render the pending state (rev: 0, isPending: true), and immediately after, we're able to render the next state (rev: 1, isPending: false):

trying revision 0. pending: true 
-- YES! This will get to the screen! 
trying revision 1. pending: false 
-- YES! This will get to the screen! 

So the pending state may flicker but we immediately jump to the next version.

Hope this clarifies a bit.

TLDR: Mutating a cache without the ability to render past values is generally broken, similar to how mutating state is generally broken, and so it leads to surprising results.

@arackaf

arackaf commented Aug 5, 2020

Whoa - thanks a ton @gaearon. Super useful explanation.

The only thing I’d suggest is that being more resilient to this wouldn’t necessarily mean “supporting broken use cases” better, but rather might help expose them better. So rather than “why does isPending flicker, this must be a bug” it’d be more “why does the fallback show” which is slightly more indicative of broken user code.
