Use unique thread ID for each partial render to access Context #14182
Conversation
The new context API stores the provided values on the shared context instance. When used in a synchronous context, this is not an issue. However, when used in a concurrent context, a "push provider" from one React render can have an effect on an unrelated concurrent React render. I've encountered this bug in production when using renderToNodeStream, which asks ReactPartialRenderer for bytes up to a high water mark before yielding. If two Node streams are created and read from in parallel, the state of one can pollute the other. I wrote a failing test to illustrate the conditions under which this happens. I'm also concerned that the experimental concurrent/async React rendering on the client could suffer from the same issue.
This first adds an allocator that keeps track of a unique ThreadID index for each currently executing partial renderer. IDs don't just grow; they are reused as streams are destroyed. This ensures that IDs are kept nice and compact. This lets us use an "array" on each Context object to store the current values. Lookups are fast because they're just reading an offset into a tightly packed "array". I don't use an actual Array object to store the values. Instead, I rely on the fact that VMs (notably V8) treat storage of numeric index properties as a separate "elements" allocation. This lets us avoid an extra indirection. However, we must ensure that these arrays are not holey to preserve this feature. To do that I store the _threadCount on each context (effectively it takes the place of the .length property on an array). This lets us first validate that the context has enough slots before we access the slot. If not, we fill in the slots with the default value.
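To make the scheme concrete, here is a minimal standalone sketch of the idea (hypothetical helper names, not React's actual implementation): each "thread" (concurrent partial renderer) reads and writes its own numeric slot on the shared context object, and `_threadCount` plays the role of `.length` so slots are always filled in order and the backing store stays packed.

```javascript
// Sketch only: per-thread context slots, with _threadCount standing in
// for .length so the numeric properties are never holey.
function createContext(defaultValue) {
  return {
    _currentValue: defaultValue, // slot used by the primary renderer
    _threadCount: 0,             // number of numeric slots initialized
  };
}

// Make sure slots 0..threadID exist, filling gaps with the default value.
function validateContextBounds(context, threadID) {
  for (let i = context._threadCount; i <= threadID; i++) {
    context[i] = context._currentValue;
    context._threadCount = i + 1;
  }
}

function pushProvider(context, threadID, value) {
  validateContextBounds(context, threadID);
  context[threadID] = value;
}

function readContext(context, threadID) {
  validateContextBounds(context, threadID);
  return context[threadID];
}

// Two concurrent "renders" no longer pollute each other:
const Theme = createContext('default');
pushProvider(Theme, 1, 'dark');  // render on thread 1
pushProvider(Theme, 2, 'light'); // unrelated render on thread 2
```

With this layout, `readContext(Theme, 1)` still sees `'dark'` after thread 2 pushes `'light'`, and a thread that never saw a provider falls back to the default.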
// Allocates a new index for each request. Tries to stay as compact as possible so that these
// indices can be used to reference a tightly packed array. As opposed to being used in a Map.
// The first allocated index is 1.
I think what we're going to do is use this same strategy on the client but primary renders like React DOM and React Native will use index 0 hard coded so they can skip any allocation and resizing logic. Then secondary renderers can use index 1 and above.
I think this looks correct
// If we don't have enough slots in this context to store this threadID,
// fill it in without leaving any holes to ensure that the VM optimizes
// this as non-holey index properties.
for (let i = context._threadCount; i <= threadID; i++) {
this is easier to read for me:
while (context._threadCount <= threadID) {
  context[context._threadCount++] = context._currentValue2;
}
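Both shapes fill the same slots. A quick standalone check (using stubbed context objects, not React's actual ones) shows the for-loop from the diff and the while-loop suggestion are equivalent, as long as the loop runs through threadID inclusive:

```javascript
// Stub contexts: _threadCount counts the initialized numeric slots.
function fillWithFor(context, threadID) {
  for (let i = context._threadCount; i <= threadID; i++) {
    context[i] = context._currentValue2;
    context._threadCount = i + 1;
  }
}

function fillWithWhile(context, threadID) {
  while (context._threadCount <= threadID) {
    context[context._threadCount++] = context._currentValue2;
  }
}

const a = { _currentValue2: 'd', _threadCount: 0 };
const b = { _currentValue2: 'd', _threadCount: 0 };
fillWithFor(a, 3);  // fills slots 0..3, _threadCount becomes 4
fillWithWhile(b, 3); // same end state
```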
@@ -42,6 +42,9 @@ export function createContext<T>(
  // Secondary renderers store their context values on separate fields.
  _currentValue: defaultValue,
  _currentValue2: defaultValue,
why even keep these any more instead of using "threads" on the client too? only the lack of global coordination?
We'll probably just do this on the client too. See my other comment about primary renderers could skip the global coordination.
Interestingly, we do need the defaultValue somewhere, so at least one field will remain.
(originally set here)
let oldArray = nextAvailableThreadIDs;
let oldSize = oldArray.length;
let newSize = oldSize * 2;
if (newSize > 0x10000) {
nit: (2 << 16) probably easier to read
  return growThreadCountAndReturnNextAvailable();
}
nextAvailableThreadIDs[0] = nextAvailableThreadIDs[nextID];
return nextID;
Could return nextID - 1 (then +1 in free) so they appear to start at 0.
I wonder if this will mess with SMI optimizations. SMI - 1 might no longer be a SMI maybe?
Also just doing a bit of extra math is annoying. I think I'll use the primary renderer optimization as an excuse for why we shouldn't. :P
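Pieced together from the snippets quoted above, the whole allocator can be sketched as follows. This is a reconstruction, not the exact PR source: the invariant is replaced with a plain throw, and the details of the grow path beyond the quoted lines are assumptions. Slot 0 of the typed array holds the head of a linked free list, each other slot stores the "next free" ID, and 0 is the "list empty" sentinel:

```javascript
// Free-list allocator over a Uint16Array. IDs start at 1 and are reused
// after being freed, keeping the set of live IDs compact.
let nextAvailableThreadIDs = new Uint16Array(16);
for (let i = 0; i < 15; i++) {
  nextAvailableThreadIDs[i] = i + 1; // each slot points at the next free ID
}
nextAvailableThreadIDs[15] = 0; // sentinel: nothing left, grow on next alloc

function growThreadCountAndReturnNextAvailable() {
  const oldArray = nextAvailableThreadIDs;
  const oldSize = oldArray.length;
  const newSize = oldSize * 2;
  if (newSize > 0x10000) {
    throw new Error('Maximum number of concurrent React renderers exceeded.');
  }
  const newArray = new Uint16Array(newSize);
  newArray.set(oldArray);
  nextAvailableThreadIDs = newArray;
  newArray[0] = oldSize + 1; // head of the refilled free list
  for (let i = oldSize; i < newSize - 1; i++) {
    newArray[i] = i + 1;
  }
  newArray[newSize - 1] = 0; // new sentinel
  return oldSize; // hand out the first ID the growth made available
}

function allocThreadID() {
  const nextID = nextAvailableThreadIDs[0];
  if (nextID === 0) {
    return growThreadCountAndReturnNextAvailable();
  }
  // Pop: the head now points at whatever the popped slot pointed at.
  nextAvailableThreadIDs[0] = nextAvailableThreadIDs[nextID];
  return nextID;
}

function freeThreadID(id) {
  // Push the freed ID back onto the front of the free list.
  nextAvailableThreadIDs[id] = nextAvailableThreadIDs[0];
  nextAvailableThreadIDs[0] = id;
}
```

Allocation and free are both O(1), and a freed ID is handed back out before any higher ID, which is what keeps the per-context value arrays compact.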
_read(size) {
  try {
    this.push(this.partialRenderer.read(size));
  } catch (err) {
    this.emit('error', err);
is this breaking?
Maybe? Or is it a fix?
what is it fixing?
I submitted a PR to fix this: #14314
'Maximum number of concurrent React renderers exceeded. ' +
  'This can happen if you are not properly destroying the Readable provided by React. ' +
  'Ensure that you call .destroy() on it if you no longer want to read from it.',
);
┏┓
┃┃╱╲ in this
┃╱╱╲╲ house
╱╱╭╮╲╲ we
▔▏┗┛▕▔ use
╱▔▔▔▔▔▔▔▔▔▔╲
invariant
╱╱┏┳┓╭╮┏┳┓ ╲╲
▔▏┗┻┛┃┃┗┻┛▕▔
also can we tweak:
Ensure that you call .destroy() on it when you are finished reading from it.
current wording sounds a bit like a special case
the fact that we're not using the storage for allocated IDs feels so wasteful 😂
Maybe we'll find something else to store in there. :D I suspect that we'll want to use these IDs for more things in the future. E.g. we have similar concepts in interaction tracking. If we start using actual threads, then all our globals need to be thread local, but adding many thread locals globally is bad news, so maybe things like the current owner and dispatcher will use this too.
We don't really have good enough coverage internally to know if this is fast/safe in production environments. The best we can do at this point is probably just to release it in a patch.
'Maximum number of concurrent React renderers exceeded. ' +
  'This can happen if you are not properly destroying the Readable provided by React. ' +
  'Ensure that you call .destroy() on it if you no longer want to read from it.' +
  ', and did not read to the end. If you use .pipe() this should be automatic.',
.,
Instead of denoting the terminal ID as
return markup;
try {
  const markup = renderer.read(Infinity);
  return markup;
Any reason why these aren't just return renderer.read(Infinity)?
@@ -835,6 +799,7 @@ class ReactDOMServerRenderer {
  while (out[0].length < bytes) {
    if (this.stack.length === 0) {
      this.exhausted = true;
      freeThreadID(this.threadID);
could also replace these two lines with this.destroy()
I agree with @sophiebits that it seems like a useful optimization to have thread IDs start at 0 and grow larger rather than start at the high ID and shrink towards 0 - that avoids having to fill a bunch of holes on context objects for the common case of a single thread.
I'm also curious to see if there's a meaningful performance difference between this and using a Map per renderer (especially a native Map), swapping in an implementation of useContext while performing partial rendering so it reads from the correct Map.
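As a rough illustration of that alternative (hypothetical code, not what this PR implements): each partial renderer would own a Map keyed by context object, falling back to the context's default value on a miss:

```javascript
// Hypothetical Map-per-renderer context storage, illustrating the
// alternative discussed above rather than the approach this PR takes.
class RendererContextValues {
  constructor() {
    this.stacks = new Map(); // context object -> stack of provided values
  }
  pushProvider(context, value) {
    let stack = this.stacks.get(context);
    if (stack === undefined) {
      this.stacks.set(context, (stack = []));
    }
    stack.push(value);
  }
  popProvider(context) {
    this.stacks.get(context).pop();
  }
  readContext(context) {
    const stack = this.stacks.get(context);
    return stack !== undefined && stack.length > 0
      ? stack[stack.length - 1]
      : context._currentValue; // fall back to the default
  }
}

// Two renderers with independent Maps cannot pollute each other:
const ThemeContext = { _currentValue: 'default' };
const rendererA = new RendererContextValues();
const rendererB = new RendererContextValues();
rendererA.pushProvider(ThemeContext, 'dark');
```

The trade-off versus the thread-ID scheme is an extra Map lookup (and allocation) per context read, instead of a packed numeric property access on the context itself.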
@leebyron The IDs are increasing. They just start at 1.
Why didn’t tests catch this?
@gaearon I don't know enough about this threaded SSR implementation. @sebmarkbage should be able to explain why?
// We assume that this is the same as the defaultValue which might not be
// true if we're rendering inside a secondary renderer but they are
// secondary because these use cases are very rare.
context[i] = context._currentValue2;
@trueadm This is meant to read the default value.
I’m not sure how the proxying works with this. Is it DEV only or both?
What proxying?
@@ -42,6 +42,9 @@ export function createContext<T>(
  // Secondary renderers store their context values on separate fields.
  _currentValue: defaultValue,
  _currentValue2: defaultValue,
(originally set here)
Could you please explain the testing methodology? This test prints:

fit('waat', () => {
  const MyContext = React.createContext(10);
  function Component() {
    const { Consumer, Provider } = MyContext;
    return (
      <React.Fragment>
        <Consumer>
          {(value: number) => <span>{value}</span>}
        </Consumer>
      </React.Fragment>
    );
  }
  console.log(ReactDOMServer.renderToString(<Component />));
});

I also checked that

const React = require('react');
const ReactDOMServer = require('react-dom/server');

const MyContext = React.createContext(10);

function Component() {
  const { Consumer, Provider } = MyContext;
  return React.createElement(React.Fragment, null, React.createElement(Consumer$
}

prints
@gaearon I'm doing this like in your second example, except I've copied over the production server bundle and pasted it into the
Are you sure you don't have two React copies?
@@ -42,6 +42,9 @@ export function createContext<T>(
  // Secondary renderers store their context values on separate fields.
  _currentValue: defaultValue,
  _currentValue2: defaultValue,
  // Used to track how many concurrent renderers this context currently
  // supports within a single renderer, such as parallel server rendering.
  _threadCount: 0,
@trueadm did you copy over the new react package? This line is important.
Oh. Does this mean react-dom/server is not compatible with react@16.0.0? We've tried to relax this requirement during the 16.x.x release line. If this doesn't work we need to start bumping peer deps again.
This was it! I'm sorry, I feel stupid now :(
@gaearon We accidentally bumped peer deps anyway.
The fix is simple enough I did it anyway. We should relax peer deps back imo.
Ignore me. I was using the wrong
It still points out an issue though. We intended to support mismatching versions of
This broke me in a minor version update (16.6.1 -> 16.6.3). I was calling … This is probably desirable, as the previous behavior seems weird, but it is likely worth flagging as a potential breaking change in the release notes. Unfortunately, it is already out in a minor version, although this use case seems very rare.
@nomcopter Thanks for flagging. I think we’re going to consider that a bug fix. It is not intended or desirable that context is transferred. What is worse is that it could accidentally leak a context unintentionally, in similar ways as the bug this PR was meant to fix. Maybe we should have some kind of Portal solution for that use case, just like we did on the client.
Regression introduced in facebook#14182 resulted in errors no longer being emitted on streams, breaking many consumers. Co-authored-by: Elliot Jalgard <elliot.j@live.se>
Filed #14292 to track it.
Fixes #13874
Alternative to #13877
This first adds an allocator that keeps track of a unique ThreadID index for each currently executing partial renderer. IDs don't just grow; they are reused as streams are destroyed.
This ensures that IDs are kept nice and compact.
One minor breakage is that it is no longer safe to just let streams be GC'd for clean up. Typically you would never drop a stream on the floor anyway; it's either exhausted or it errors.
This lets us use an "array" on each Context object to store the current values. Lookups are fast because they're just reading an offset into a tightly packed "array".
I don't use an actual Array object to store the values. Instead, I rely on the fact that VMs (notably V8) treat storage of numeric index properties as a separate "elements" allocation.
This lets us avoid an extra indirection.
However, we must ensure that these arrays are not holey to preserve this feature.
To do that I store the _threadCount on each context (effectively it takes the place of the .length property on an array, and also lets me use that pun). It's unclear whether a "real" .length would be faster if it could avoid the internal bounds check. This lets us first validate that the context has enough slots before we access the slot. If not, we fill in the slots with the default value.
This should be a fast approach, in theory, but I haven't actually confirmed that the builds don't deopt somewhere yet.
cc @leebyron