generate reports for all supported architectures #629

Merged 1 commit on Aug 5, 2023
160 changes: 98 additions & 62 deletions .github/workflows/check-type-sizes.yml
@@ -6,87 +6,99 @@ on:
- master

jobs:
run:
build-windows:
name: Process sizes (win64, ${{ matrix.desc }})
uses: DFHack/dfhack/.github/workflows/build-windows.yml@develop
with:
dfhack_ref: develop
structures_ref: ${{ matrix.ref }}
artifact-name: sizes-Windows-64-${{ matrix.desc }}
platform-files: false
common-files: false
xml-dump-type-sizes: true
strategy:
fail-fast: false
matrix:
include:
- desc: old
ref: master
- desc: new
ref: ${{ github.ref }}
secrets: inherit

build-linux:
name: Process sizes (linux64, ${{ matrix.desc }})
uses: DFHack/dfhack/.github/workflows/build-linux.yml@develop
with:
dfhack_ref: develop
structures_ref: ${{ matrix.ref }}
artifact-name: sizes-Linux-64-${{ matrix.desc }}
platform-files: false
common-files: false
xml-dump-type-sizes: true
strategy:
fail-fast: false
matrix:
include:
- desc: old
ref: master
- desc: new
ref: ${{ github.ref }}
secrets: inherit

compare:
name: ${{ matrix.os }} (${{ matrix.arch }}-bit)
runs-on: ${{ matrix.os }}
needs:
- build-windows
- build-linux
runs-on: ${{ matrix.os }}-latest
strategy:
fail-fast: false
matrix:
os:
- ubuntu-22.04
- windows
- ubuntu
arch:
- 64
- 32
steps:
- name: Set up environment
run: |
echo "GHA_OS_TAG=$(echo '${{ runner.os }}' | tr '[[:upper:]]' '[[:lower:]]')${{ matrix.arch }}" >> $GITHUB_ENV
shell: bash
run: echo "GHA_OS_TAG=$(echo '${{ runner.os }}' | tr '[[:upper:]]' '[[:lower:]]')${{ matrix.arch }}" >> $GITHUB_ENV
- name: Install dependencies
run: |
sudo apt-get update
sudo apt-get install \
libxml-libxml-perl \
libxml-libxslt-perl \
ninja-build \
python3-jinja2
if [[ ${{ matrix.arch}} = 32 ]]; then
sudo apt-get install gcc-multilib g++-multilib
fi
- name: Clone DFHack
uses: actions/checkout@v3
with:
repository: dfhack/dfhack
submodules: true
- name: Clone df-structures (old)
run: pip install Jinja2
- name: Clone df-structures
uses: actions/checkout@v3
- name: Download type sizes artifact (old)
uses: actions/download-artifact@v3
with:
path: xml-old
ref: master
- name: Clone df-structures (new)
uses: actions/checkout@v3
name: sizes-${{ runner.os }}-${{ matrix.arch }}-old
path: old
- name: Download type sizes artifact (new)
uses: actions/download-artifact@v3
with:
path: xml-new
# ensure that DFHack configures properly on the new branch:
- name: Install new structures
run: rsync -av xml-new/ library/xml/ --exclude .git
- name: Configure DFHack
run: |
cmake \
-S . \
-B build-ci \
-G Ninja \
-D DFHACK_BUILD_ARCH=${{ matrix.arch }} \
-D BUILD_PLUGINS:BOOL=OFF \
-D BUILD_XMLDUMP:BOOL=ON \
-D CMAKE_INSTALL_PREFIX="$PWD/install-ci"
- name: Dump type sizes (old)
run: |
rsync -av xml-old/ library/xml/ --exclude .git
ninja -C build-ci clean
ninja -C build-ci xml-dump-type-sizes
./build-ci/library/xml/tools/xml-dump-type-sizes > sizes-old.txt
- name: Dump type sizes (new)
name: sizes-${{ runner.os }}-${{ matrix.arch }}-new
path: new
- name: Dump type sizes
shell: bash
run: |
rsync -av xml-new/ library/xml/ --exclude .git
ninja -C build-ci clean
ninja -C build-ci xml-dump-type-sizes
./build-ci/library/xml/tools/xml-dump-type-sizes > sizes-new.txt
chmod 755 old/xml-dump-type-sizes new/xml-dump-type-sizes
old/xml-dump-type-sizes > sizes-old.txt
new/xml-dump-type-sizes > sizes-new.txt
- name: Generate report
run: |
python3 library/xml/tools/compare-sizes.py --old sizes-old.txt --new sizes-new.txt --platform ${{ env.GHA_OS_TAG }} --output sizes-${{ env.GHA_OS_TAG }}.json
run: python3 tools/compare-sizes.py --old sizes-old.txt --new sizes-new.txt --platform ${{ env.GHA_OS_TAG }} --output sizes-${{ env.GHA_OS_TAG }}.json
- name: Upload report
uses: actions/upload-artifact@v3
with:
name: sizes-${{ env.GHA_OS_TAG }}
path: sizes-${{ env.GHA_OS_TAG }}.json

report:
needs: [run]
runs-on: ubuntu-22.04
needs: compare
runs-on: ubuntu-latest
steps:
- name: Clone df-structures
uses: actions/checkout@v3
- name: Download reports
- name: Download artifacts
uses: actions/download-artifact@v3
with:
path: reports
@@ -97,17 +109,41 @@ jobs:
--reports $(find reports -name '*.json') \
--template tools/type-size-comment.md.in \
--output tools/type-size-comment.md \
--github-actions
- name: Generate artifact
--github-actions >num_rows.txt
echo rows=$(cat num_rows.txt) >> $GITHUB_OUTPUT
- name: Generate comment data
run: |
jq --null-input \
--rawfile comment tools/type-size-comment.md \
--arg comment_search "<!--type-size-comment-->" \
--arg pr_number "${{ github.event.pull_request.number }}" \
--arg rows "${{ steps.generate_comment.outputs.rows }}" \
'{$comment, $comment_search, $pr_number, update_only:($rows|tonumber <= 0)}' > comment-info.json
- name: Upload artifact
uses: actions/upload-artifact@v3
- name: Format comment
id: format
run: |
set -ex
body="$(jq --raw-output .comment comment-info.json)"
body="${body//'%'/'%25'}"
body="${body//$'\n'/'%0A'}"
body="${body//$'\r'/'%0D'}"
echo "::set-output name=body::$body"

echo "COMMENT_SEARCH=$(jq --raw-output .comment_search comment-info.json)" >> $GITHUB_ENV
echo "COMMENT_PR_NUMBER=$(jq --raw-output .pr_number comment-info.json)" >> $GITHUB_ENV
echo "COMMENT_UPDATE_ONLY=$(jq --raw-output 'if .update_only then 1 else 0 end' comment-info.json)" >> $GITHUB_ENV
- name: Find existing comment
id: find_comment
uses: peter-evans/find-comment@v1
with:
issue-number: ${{ env.COMMENT_PR_NUMBER }}
comment-author: github-actions[bot]
body-includes: ${{ env.COMMENT_SEARCH }}
- name: Post comment
if: ${{ env.COMMENT_UPDATE_ONLY != 1 || steps.find_comment.outputs.comment-id != '' }}
uses: peter-evans/create-or-update-comment@v1
with:
name: comment-info.json
path: comment-info.json
issue-number: ${{ env.COMMENT_PR_NUMBER }}
comment-id: ${{ steps.find_comment.outputs.comment-id }}
body: ${{ steps.format.outputs.body }}
edit-mode: replace
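As an aside on the "Generate comment data" step above: jq's `--null-input` mode with the `{$var}` shorthand builds a JSON object directly from shell-provided strings, which is what lets the workflow pass structured data between steps without hand-escaping. A minimal standalone sketch (the values here are illustrative, not taken from a real run):

```shell
# {$pr_number} is jq shorthand for {"pr_number": $pr_number};
# update_only becomes a real boolean, not a string.
jq --null-input \
   --arg pr_number "123" \
   --arg rows "0" \
   '{$pr_number, update_only:($rows|tonumber <= 0)}'
```

This prints `{"pr_number":"123","update_only":true}`: the `--arg` values arrive as strings, and `tonumber` converts `rows` before the comparison.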
48 changes: 0 additions & 48 deletions .github/workflows/comment-pr.yml

This file was deleted.

2 changes: 2 additions & 0 deletions tools/CMakeLists.txt
@@ -14,4 +14,6 @@ if(BUILD_XMLDUMP)
)
add_executable(xml-dump-type-sizes dump-type-sizes.cpp)
add_dependencies(xml-dump-type-sizes generate_headers)
install(TARGETS xml-dump-type-sizes
DESTINATION .)
Comment on lines +17 to +18
Member:
Could we avoid install() here? This is installing the executable into the DF folder if the CMake option to build it is enabled, which isn't necessary. (It's not on by default, but I ran into it on my end.)

Member Author:
Since the actual tests happen in a different runner from the build runner, we need to upload the built binaries and other files as an artifact. These are read from the install directory for all other invocations of the build, so it keeps things clean and consistent if we install this built binary too. We could add custom workflow steps to manually install the file instead of having cmake do it. Alternatively, we could install to a directory that's more out of the way, but still in the install tree.

Member:
Would it work to upload just the xml-dump-type-sizes executable? As far as I'm aware, that's all the native test runner needs in this workflow. We don't need all of DFHack.

My concern is not just that the xml-dump-type-sizes executable lands in the top-level DF folder, although that is the more noticeable annoyance - it's also that the executable is getting installed at all. It's not a large amount of disk space, but it is disk space we don't need to consume, and I have a lot of DF folders.

myk002 (Member Author) commented on Aug 10, 2023:
To be clear, we do just build xml-dump-type-sizes when we run that build. This is the entire build log from https://github.com/DFHack/df-structures/actions/runs/5824369109/job/15793271743

Run ninja -C build install
ninja: Entering directory `build'
[1/5] Generating codegen.out.xml and df/headers
[2/5] Generating dump-type-sizes.cpp
[3/5] Building CXX object library/xml/tools/CMakeFiles/xml-dump-type-sizes.dir/dump-type-sizes.cpp.o
[4/5] Linking CXX executable library/xml/tools/xml-dump-type-sizes
[4/5] Install the project...
-- Install configuration: "Release"
-- Installing: /home/runner/work/df-structures/df-structures/build/image/./xml-dump-type-sizes
-- Set runtime path of "/home/runner/work/df-structures/df-structures/build/image/./xml-dump-type-sizes" to "hack"

The change to make it install the binary that it builds was done so this step would work without modification (all builds install to build/image):

      - name: Upload artifact
        if: inputs.artifact-name
        uses: actions/upload-artifact@v3
        with:
          name: ${{ inputs.append-date-and-hash && steps.artifactname.outputs.name || inputs.artifact-name }}
          path: build/image/*

we could work around this by changing the above to look like this for the build-linux and build-windows (and future build-macos) reusable workflows:

      - name: Upload artifact (common)
        if: inputs.artifact-name && !inputs.xml-dump-type-sizes
        uses: actions/upload-artifact@v3
        with:
          name: ${{ inputs.append-date-and-hash && steps.artifactname.outputs.name || inputs.artifact-name }}
          path: build/image/*
      - name: Upload artifact (xml-dump-type-sizes)
        if: inputs.artifact-name && inputs.xml-dump-type-sizes
        uses: actions/upload-artifact@v3
        with:
          name: ${{ inputs.append-date-and-hash && steps.artifactname.outputs.name || inputs.artifact-name }}
          path: build/library/xml/tools/CMakeFiles/xml-dump-type-sizes.dir/*

which adds complexity, but isn't horrible. Another way is to add a build option controlling whether to install the binary, which can be off by default but turned on by CI. That would mean adding a -DINSTALL_XMLDUMP:BOOL=1 to the configure steps in build-linux and build-windows -- it can always be on, so there would be less added mess in the yaml: no additional input parameters or extra workflow steps.
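One shape that build-option approach could take (a sketch, not the merged code; the INSTALL_XMLDUMP name comes from the comment above, but the default and wiring are assumptions):

```cmake
# Sketch: gate the install() on an option that CI can flip on with
# -DINSTALL_XMLDUMP:BOOL=1 while leaving local builds untouched.
option(INSTALL_XMLDUMP "Install the xml-dump-type-sizes binary" OFF)

if(BUILD_XMLDUMP)
    add_executable(xml-dump-type-sizes dump-type-sizes.cpp)
    add_dependencies(xml-dump-type-sizes generate_headers)
    if(INSTALL_XMLDUMP)
        install(TARGETS xml-dump-type-sizes DESTINATION .)
    endif()
endif()
```

With the option defaulting to OFF, a local `ninja install` would skip the binary while the CI configure step opts in explicitly.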

Member:
Yeah, the reuse of the normal build+install workflow steps for the size-check pipeline does make things more complicated. I guess an INSTALL_XMLDUMP flag could work, but it just doesn't feel as clean to me.

I had the old workflow build and run xml-dump-type-sizes in a single workflow, rather than splitting up the build+run steps. I would really like to recombine them - the added time of packaging an artifact in one step and unpackaging it in the next isn't really necessary - but I'm not sure this is possible.

myk002 (Member Author) commented on Aug 16, 2023:
The reusable workflows might make this particular workflow take more individual steps, but it also makes it possible to get combined type size reports for windows and linux (and has a pluggable structure where we can easily add mac as well if we want to), which I think is kind of important. Also, consolidating the build logic in a reusable workflow has given us the ability to run builds and tests in the structures and scripts repos without having to maintain N copies of the build logic. Can you really say that's not worth it?

In this case, the latency appears to be almost entirely due to the download of the Docker image for the Windows build of xml-dump-type-sizes (>1 minute). Uploading/downloading of artifacts is on the order of a few seconds across the entire workflow (reference workflow). If we could slim down the Docker download, we would see speedups across all our workflows.

Another thing we could do here is split the Linux report from the Windows report, allowing the Linux one to appear much sooner (<1m) and give quicker initial feedback. The Windows report would show up about 1m30s after the Linux report. Of course, if we could get the cross-compiling Docker image size down so that it didn't take so long to download, splitting the reports would be unnecessary.

Member:
That's not what I'm saying... I'm only saying that the upload/download steps are not strictly necessary, and they somewhat complicated the logic for installing only what we need. My mistake on the timing - I'm used to the upload step in particular being slow for large numbers of files, but that's not the case here.

Thanks for the followup changes!

endif()
2 changes: 1 addition & 1 deletion tools/generate-type-size-comment.py
@@ -29,4 +29,4 @@
f.write(template.render(rows=rows))

if args.github_actions:
print("::set-output name=rows::%s" % len(rows))
print("%s" % len(rows))
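This print change pairs with the workflow change above: GitHub deprecated the `::set-output` workflow command in favor of appending `key=value` lines to the file named by `$GITHUB_OUTPUT`, so the script now prints a bare count that the workflow captures into `num_rows.txt`. A minimal sketch of the new pattern (the temp-file path and the value 42 are illustrative):

```shell
# In a real job GitHub provides GITHUB_OUTPUT; point it at a temp file here.
GITHUB_OUTPUT=/tmp/gh_output_demo.txt
: > "$GITHUB_OUTPUT"

rows=42                                # stand-in for $(cat num_rows.txt)
# Old, deprecated form: echo "::set-output name=rows::$rows"
echo "rows=$rows" >> "$GITHUB_OUTPUT"

cat "$GITHUB_OUTPUT"                   # -> rows=42
```

A later step can then read the value as `${{ steps.generate_comment.outputs.rows }}`, as the workflow above does.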