
Verbose build takes 75x as long as regular build #4783

Closed
rgov opened this issue Nov 13, 2023 · 8 comments · Fixed by #4786

Comments

rgov (Contributor) commented Nov 13, 2023

Configuration

Operating system: macOS 14.1.1 (arm64)

PlatformIO Version: PlatformIO Core, version 6.1.11

Description of problem

Building my project with --verbose significantly increases the time to build.

Steps to Reproduce

I have a very minimal project based on espidf:

platformio.ini
[env:test]
platform = espressif32 @ 5.4.0
framework = espidf

# Settings for ESP32-S3-WROOM-1U-N16
board = esp32-s3-devkitc-1
board_build.flash_mode = dio
board_build.partitions = partitions.csv
board_upload.flash_size = 16MB
board_upload.before_reset = no_reset
board_upload.after_reset = no_reset

Building regularly:

$ pio run -t clean && pio run
...
=== [SUCCESS] Took 22.97 seconds ===

Building verbosely:

$ pio run -t clean && pio run -v
...
=== [SUCCESS] Took 1722.01 seconds ===
valeros (Member) commented Nov 14, 2023

Hi @rgov, that's somewhat expected. IDF has hundreds of source files with huge compile commands, and every print to the terminal requires some additional processing time. As a workaround, just redirect the output to a file: pio run > log.txt

rgov (Contributor, Author) commented Nov 14, 2023

I am pretty certain there is a bug here. You can build any number of large programs with verbose output and it does not take 75x as long. The huge compile commands are already constructed in memory; printing them to the terminal takes milliseconds. If it took 5% longer I wouldn't even notice.

Even if the verbose build disabled parallel compilation, I would expect roughly 8x the build time, not 75x.

rgov (Contributor, Author) commented Nov 14, 2023

The underlying SCons call on my system when I run pio run -v is:

/opt/homebrew/Cellar/platformio/6.1.11_2/libexec/bin/python \
    /Users/rzg/.platformio/packages/tool-scons/scons.py \
    -Q --warn=no-no-parallel-support --jobs 10 \
    --sconstruct /opt/homebrew/Cellar/platformio/6.1.11_2/libexec/lib/python3.12/site-packages/platformio/builder/main.py \
    PIOVERBOSE=1 ISATTY=1 PIOENV=dGVzdA== \
    PROJECT_CONFIG=L3ByaXZhdGUvdG1wL2JsaW5reS9wbGF0Zm9ybWlvLmluaQ== \
    PROGRAM_ARGS=W10= \
    BUILD_SCRIPT=L1VzZXJzL3J6Zy8ucGxhdGZvcm1pby9wbGF0Zm9ybXMvZXNwcmVzc2lmMzJANS40LjAvYnVpbGRlci9tYWluLnB5

This completes the build in about 20 seconds, just like the non-verbose build.

Therefore I think the slowdown is in the output buffering that PlatformIO does.
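
As a side note, the PIOENV, PROJECT_CONFIG, PROGRAM_ARGS, and BUILD_SCRIPT values in the command above are just base64-encoded strings. A quick sketch to decode the ones shown (standard library only):

import base64

# Values copied from the SCons command above; decoding shows the plain
# strings PlatformIO passes through to the build script.
encoded = {
    "PIOENV": "dGVzdA==",
    "PROJECT_CONFIG": "L3ByaXZhdGUvdG1wL2JsaW5reS9wbGF0Zm9ybWlvLmluaQ==",
    "PROGRAM_ARGS": "W10=",
}
for name, value in encoded.items():
    print(name, "=", base64.b64decode(value).decode())

# Prints "test", "/private/tmp/blinky/platformio.ini", and "[]" respectively.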

ivankravets (Member) commented

@rgov thanks for the report. We will fix this issue in PlatformIO Core 7.0 (leveraging asyncio for subprocessing).
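
A minimal sketch of what asyncio-based subprocess handling could look like, purely as an illustration (this is not PlatformIO's implementation, and the scons command line is only an example): stream each output line as it arrives instead of accumulating everything in one growing buffer.

import asyncio
import sys

async def run_and_stream(cmd):
    # Spawn the build tool with piped output and forward each line as it
    # arrives, so nothing accumulates in an ever-growing buffer.
    proc = await asyncio.create_subprocess_exec(
        *cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.STDOUT,
    )
    async for line in proc.stdout:
        sys.stdout.write(line.decode(errors="replace"))
    return await proc.wait()

# Hypothetical usage:
# exit_code = asyncio.run(run_and_stream(["scons", "-Q", "--jobs", "10"]))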

rgov (Contributor, Author) commented Nov 16, 2023

@ivankravets See pull request #4786; it's a very simple fix.

ivankravets (Member) commented

Did it help you?

rgov (Contributor, Author) commented Nov 16, 2023

Yes, I posted a benchmark showing the issue is resolved. The problem is a buffer that grows without bound and is reallocated and copied over and over again. At about 2 MB, this operation in CPython seems to slow down tremendously.
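
As an illustration of that pattern (a sketch, not PlatformIO's actual code): repeatedly appending to an immutable bytes buffer copies the whole buffer on every append, which is quadratic in the total output size, while collecting the chunks and joining them once stays linear.

import time

CHUNK = b"x" * 256   # stand-in for one line of verbose compiler output
N = 8_000            # ~2 MB total, the region where the slowdown was observed

def growing_buffer():
    # Every += allocates a new bytes object and copies everything so far.
    buf = b""
    for _ in range(N):
        buf += CHUNK
    return buf

def join_once():
    # Collect the chunks and copy them exactly once at the end.
    parts = []
    for _ in range(N):
        parts.append(CHUNK)
    return b"".join(parts)

for fn in (growing_buffer, join_once):
    start = time.perf_counter()
    fn()
    print(f"{fn.__name__}: {time.perf_counter() - start:.3f} s")

Accumulating pieces and joining once (or using a bytearray or io.BytesIO) keeps the total copying linear, which is consistent with the order-of-magnitude improvement reported above.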

ivankravets modified the milestones: 7.0, 6.1.12 · Nov 16, 2023
ivankravets added a commit that referenced this issue Nov 16, 2023
ivankravets (Member) commented

Thanks! Please re-test with the latest dev-branch (pio upgrade --dev).
