
Maya: Save current scene on workfile publish #2744

Merged 3 commits from maya_save_on_workfile into ynput:develop on Feb 17, 2022

Conversation

@BigRoy (Collaborator) commented Feb 16, 2022

Brief description

Implementation for #2743

Testing notes:

  1. Make a workfile, save it, change something drastic (DON'T SAVE!), then publish your file.
  2. Ensure the published workfile contains the drastic change.

@m-u-r-p-h-y (Member) left a comment


It works as expected.

The question is whether the timing in the queue is right, since the file is saved after running the extractors, Alembics, etc.

Shouldn't we make sure we save the scene before any possible changes are made further down the publishing line?

@BigRoy (Collaborator, Author) commented Feb 16, 2022

The question is whether the timing in the queue is right, since the file is saved after running the extractors, Alembics, etc.
Shouldn't we make sure we save the scene before any possible changes are made further down the publishing line?

Extractors should (or must?) be built as atomic operations that leave the current scene unaltered; otherwise the user's scene would break anyway, since they don't reopen the scene. You are correct that, strictly speaking, it might be safer to save before we do anything that might interfere with the current scene, but honestly, if you need to rely on that, you would also always have to reopen the scene after every publish, just to be safe.

We could move the save to before extraction to at least gain some extra safety for workfile publishes. But if the local scene is broken, and that is where the user usually continues their work, then we are already in a messed-up state with a broken scene. I feel we are currently at a stage where it is safe to assume things are left unaltered, and if not, that's a bug, since that is what is currently being relied on across all hosts.

@m-u-r-p-h-y -> Let me know if you still want me to move it to ExtractorOrder - 0.49 (or more likely ExtractorOrder - 0.48, so it never conflicts with the StopTimer plug-in).
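
For context, a Pyblish plug-in's order is just a float, so those offsets simply slot the save right before extraction. A small sketch (the StopTimer order used here is an assumption for illustration):

import pyblish.api

# ExtractorOrder is 2.0 in Pyblish, so the offsets discussed above become:
STOP_TIMER_ORDER = pyblish.api.ExtractorOrder - 0.49  # 1.51, assumed slot of StopTimer
SAVE_SCENE_ORDER = pyblish.api.ExtractorOrder - 0.48  # 1.52, runs just after it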

@m-u-r-p-h-y (Member) left a comment


I agree it is more important to push this through to make sure we have some data integrity in place; we can discuss the philosophical matter later :)

Thanks for the quick fix!

@antirotor (Member) commented Feb 16, 2022

Now, would extraction stop on the first failed extractor?

Because if not, this should really be the first thing to do. Atomicity in extractors is just a dream, a beautiful one, don't get me wrong. Some of the extractors modify the state of the scene to extract data and clean up after themselves. But if they fail, the scene is saved broken.

And in that regard I might have done something terrible in another PR: I've created a plugin to clean up all the mess extractors might produce during their work:

openpype/hosts/maya/plugins/publish/clean_nodes.py

and this goes completely against it (and I am not saying that's a bad thing, I just thought it would be good to mention it here)

@BigRoy (Collaborator, Author) commented Feb 16, 2022

Now, would extraction stop on the first failed extractor?

No. Pyblish continues by default, but a user can stop it in between, and with slow extractors users tend to do so once an error has been detected. By default Pyblish only stops at the end of validation if errors occurred, as can be seen in the default test.

Pyblish can be set up to fail/stop directly on ANY error whatsoever, whenever you want: you can register a custom test. I actually recommended and discussed doing so recently here.
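
A minimal sketch of what such a custom test could look like, using Pyblish's register_test API (the exact behaviour wanted for OpenPype is not defined in this PR):

# Sketch: abort publishing as soon as any plug-in order has errored,
# instead of only stopping between validation and extraction (the default).
import pyblish.api


def stop_on_any_error(**vars):
    # Pyblish calls the registered test before each new order with
    # "nextOrder" (the order about to run) and "ordersWithError"
    # (orders that already produced an error). Returning a string
    # message stops the publish; returning None lets it continue.
    if vars["ordersWithError"]:
        return "Stopping: a previous plug-in raised an error"


pyblish.api.register_test(stop_on_any_error)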

Because if not, this should really be the first thing to do.

Let me push the move to ExtractorOrder in this PR.

And in that regard I might have done something terrible in another PR: I've created a plugin to clean up all the mess extractors might produce during their work

Yes. That definitely shouldn't be the way to do it. That should be wrapped up in a context manager, and could for example use the delete_after context manager to ensure nodes are deleted after they have been extracted. This shouldn't be left to another plug-in further down the line.
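
For illustration, a rough sketch of an extractor body using such a context manager, assuming a delete_after helper (as in the Maya host library) that deletes whatever is appended to the yielded list when the block exits, even if extraction fails:

from maya import cmds

from openpype.hosts.maya.api.lib import delete_after  # assumed import path


def extract_copy(nodes, path):
    with delete_after() as to_delete:
        # The temporary duplicates live only for the duration of the export
        # and are removed even if the export itself raises an error.
        duplicates = cmds.duplicate(nodes, returnRootsOnly=True)
        to_delete.extend(duplicates)
        cmds.select(duplicates, replace=True)
        cmds.file(path, exportSelected=True, type="mayaBinary", force=True)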

@BigRoy (Collaborator, Author) commented Feb 16, 2022

For what it's worth, here is more save-scene logic across the hosts; most of it sits at the beginning of extraction:

At ExtractorOrder:

At IntegratorOrder:

We might want to unify these, and perhaps even opt for a global plug-in across multiple hosts that uses the host's workio.save() to save the scene? Could be worth a separate issue?
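
As a rough illustration of that idea, a host-agnostic plug-in could look something like this (the registered_host import and the workio function names are assumptions based on the avalon-style host API of the time, not code from this PR):

import pyblish.api

from avalon.api import registered_host  # assumed import path


class SaveCurrentScene(pyblish.api.ContextPlugin):
    """Save the current workfile in any host before extraction starts."""

    label = "Save Current Scene"
    order = pyblish.api.ExtractorOrder - 0.48
    hosts = ["maya", "houdini", "nuke"]  # illustrative host list

    def process(self, context):
        host = registered_host()
        current = host.current_file()
        if host.has_unsaved_changes():
            self.log.info("Saving current file: %s", current)
            host.save_file(current)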

@antirotor (Member)

That should be wrapped up in a context manager

True, but that would work only inside one plugin. You see, I need to duplicate some meshes, combine them and reparent them, and then I want to use the FBX extractor to extract the result. Now I am thinking that the only reasonable way to do it is to pull the code out of the FBX extractor into a library, do my changes inside a context manager, and reuse that code in the same plugin.
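
A loose sketch of that refactor, with the export code lifted into a shared helper and the temporary scene edits wrapped in a context manager (all names here are illustrative, not existing OpenPype functions):

import contextlib

from maya import cmds


def export_fbx(nodes, path):
    """Shared export helper that the FBX extractor and this plugin could reuse."""
    cmds.select(nodes, replace=True)
    # Requires the fbxmaya plug-in to be loaded.
    cmds.file(path, exportSelected=True, type="FBX export", force=True)


@contextlib.contextmanager
def temporary_export_copy(meshes, parent=None):
    """Duplicate (and optionally reparent) meshes, deleting the copies on exit."""
    duplicates = cmds.duplicate(meshes, returnRootsOnly=True)
    if parent:
        duplicates = cmds.parent(duplicates, parent)
    # Combining or other destructive edits would also happen here, on the copies.
    try:
        yield duplicates
    finally:
        cmds.delete(duplicates)


# Inside the plug-in's process():
# with temporary_export_copy(instance_meshes) as copies:
#     export_fbx(copies, staging_path)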

@antirotor (Member) commented Feb 17, 2022

We might want to unify these, and perhaps even opt for a global plug-in across multiple hosts that uses the host's workio.save() to save the scene? Could be worth a separate issue?

That is an interesting idea, definitely worth exploring in a separate issue. I am not sure right now whether we have it implemented across all hosts, but I think we do.

@BigRoy (Collaborator, Author) commented Feb 17, 2022

True, but that would work only inside one plugin. You see, I need to duplicate some meshes, combine them and reparent them, and then I want to use the FBX extractor to extract the result. Now I am thinking that the only reasonable way to do it is to pull the code out of the FBX extractor into a library, do my changes inside a context manager, and reuse that code in the same plugin.

You could also use a context manager stack where you append your extra context manager only when the family is the specific unreal family, using for example contextlib.ExitStack (or on Python 2: contextlib.nested).

Example:

# pseudocode
import contextlib

managers = [a, b, c]  # context managers that always apply
if family == "unreal":
    managers.append(d)  # extra context manager only for the unreal family

with contextlib.ExitStack() as stack:
    for manager in managers:
        stack.enter_context(manager)
    export_fbx()

Anyway, this PR should be ready to merge! :)

@antirotor (Member) commented Feb 17, 2022

Yeah, and one more note, possibly for another issue: we have Validate Current Save File, which is a global plugin and is somewhat misleading about what it does. With those last words, I am merging it :)

@antirotor merged commit 37c40e2 into ynput:develop on Feb 17, 2022
@BigRoy deleted the maya_save_on_workfile branch on March 20, 2024
Labels: host: Maya, type: bug