generate reports for all supported architectures #629
Merged
Could we avoid `install()` here? This is installing the executable into the DF folder if the CMake option to build it is enabled, which isn't necessary. (It's not on by default, but I ran into it on my end.)
Since the actual tests happen in a different runner from the build runner, we need to upload the built binaries and other files as an artifact. These are read from the install directory for all other invocations of the build, so it keeps things clean and consistent if we install this built binary too. We could add some custom workflow steps to manually install the file instead of having cmake do it. Alternately, we could install to a directory that's more out of the way, but still in the install tree.
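The "install to a directory that's more out of the way" idea could look roughly like this. This is only a sketch: the `xml-dump-type-sizes` target name comes from this thread, but the guarding option name and the `hack/tools` destination are hypothetical, not the project's actual layout.

```cmake
# Sketch: install the helper binary into a subdirectory of the install
# tree instead of the top-level DF folder, so it still ends up in the
# artifact but stays out of the way. "hack/tools" is a hypothetical
# destination; BUILD_XML_DUMP_TYPE_SIZES is a hypothetical option name.
if(BUILD_XML_DUMP_TYPE_SIZES)
    add_executable(xml-dump-type-sizes xml-dump-type-sizes.cpp)
    install(TARGETS xml-dump-type-sizes DESTINATION hack/tools)
endif()
```

Paths under `DESTINATION` are relative to `CMAKE_INSTALL_PREFIX`, so the binary would still land inside the install tree that the workflow uploads.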
Would it work to upload just the `xml-dump-type-sizes` executable? As far as I'm aware, that's all that the native test runner needs to run in this workflow. We don't need all of DFHack.
My concern is not just that the `xml-dump-type-sizes` executable ends up at the top level of the DF folder, although that is a somewhat more noticeable annoyance - it's also that the executable is getting installed in general. It's not a large amount of disk space, but it is disk space we don't need to consume, and I have a lot of DF folders.
To be clear, we do just build `xml-dump-type-sizes` when we run that build. This is the entire build log from https://github.com/DFHack/df-structures/actions/runs/5824369109/job/15793271743

The change to make it install the binary that it builds was done so this step would work without modification (all builds install to `build/image`):

We could work around this by changing the above to look like this for the `build-linux` and `build-windows` (and future `build-macos`) reusable workflows:

which adds complexity, but isn't horrible. Another way is to add a build option controlling whether to install the binary, which can be off by default but turned on by CI. That would mean adding `-DINSTALL_XMLDUMP:BOOL=1` to the configure steps in `build-linux` and `build-windows` -- it can always be on, so there would be less added mess to the yaml: no additional input parameters or extra workflow steps.
Yeah, the reuse of the normal build+install workflow steps for the size-check pipeline does make things more complicated. I guess an `INSTALL_XMLDUMP` flag could work, but it just doesn't feel as clean to me.
I had the old workflow build and run `xml-dump-type-sizes` in a single workflow, rather than splitting up the build+run steps. I would really like to recombine them - the added time of packaging an artifact in one step and unpackaging it in the next isn't really necessary - but I'm not sure this is possible.
The reusable workflows might make this particular workflow take more individual steps, but it also makes it possible to get combined type size reports for windows and linux (and has a pluggable structure where we can easily add mac as well if we want to), which I think is kind of important. Also, consolidating the build logic in a reusable workflow has given us the ability to run builds and tests in the structures and scripts repos without having to maintain N copies of the build logic. Can you really say that's not worth it?
In this case, the latency appears to be almost entirely due to the download of the docker image for the windows build of `xml-dump-type-sizes` (>1 minute). Uploading/downloading of artifacts is on the order of a few seconds across the entire workflow (reference workflow). If we could slim down the docker download, we would see speedups across all our workflows.

Another thing we could do here is split the linux report from the windows report, allowing the linux one to appear much sooner (<1m) and give quicker initial feedback. The windows report would show up about 1m30s after the linux report. Of course, if we could get the cross-compiling docker image size down so that it didn't take so long to download, splitting the reports would be unnecessary.
That's not what I'm saying... I'm only saying that the upload/download steps are not strictly necessary, and that they somewhat complicated the logic for installing only what we need. My mistake on the timing - I am used to the upload step in particular being slow for large numbers of files, but that is not the case here.
Thanks for the followup changes!