Hey! The speedup happens in the next line: `x0_pred = self.denoiser.predict(nn_inputs, batch_size=self.batch_size)`. Here we only have to call `.predict` once on the concatenated matrix, which is faster than calling `.predict` twice, once on the conditional inputs and once on the unconditional inputs.
`x0_pred_label` is the prediction conditioned on the text embedding, and `x0_pred_no_label` is the unconditional prediction (where the text embedding input is zeroed out).
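To illustrate, here is a minimal NumPy sketch of this batching trick for classifier-free guidance. The function names, the dummy `predict` stand-in, and the input layout (latents concatenated with the embedding along the last axis) are assumptions for illustration, not the actual `diffuser.py` implementation; only the split into `x0_pred_label` / `x0_pred_no_label` mirrors the names from the answer above.

```python
import numpy as np

def predict(nn_inputs, batch_size=32):
    # Dummy stand-in for self.denoiser.predict: one batched forward pass.
    # (A real denoiser network would go here; 0.9 * x is just for shapes.)
    return 0.9 * nn_inputs

def guided_prediction(latents, text_emb, guidance_scale=7.5):
    """One .predict call instead of two, via batch concatenation."""
    n = latents.shape[0]
    null_emb = np.zeros_like(text_emb)  # unconditional branch: zeroed embedding
    # Build one batch of size 2n: first half conditional, second unconditional.
    batched_latents = np.concatenate([latents, latents], axis=0)
    batched_emb = np.concatenate([text_emb, null_emb], axis=0)
    nn_inputs = np.concatenate([batched_latents, batched_emb], axis=-1)
    x0_pred = predict(nn_inputs)  # single forward pass over the whole batch
    # Split the result back into conditional and unconditional halves.
    x0_pred_label, x0_pred_no_label = x0_pred[:n], x0_pred[n:]
    # Standard classifier-free guidance combination.
    return x0_pred_no_label + guidance_scale * (x0_pred_label - x0_pred_no_label)
```

The two halves see identical latents and differ only in the embedding, so the single batched call computes exactly what two separate calls would, while amortizing the per-call overhead and letting the hardware process both branches in parallel.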
Hello,
I want to ask about this code in `diffuser.py`: why does it speed up inference? Could you explain it to me?