
[ci] remove Travis (fixes #3519) #3672

Merged
jameslamb merged 65 commits into master from ci/move-to-azure on Jan 13, 2021

Conversation

@jameslamb
Collaborator

jameslamb commented Dec 23, 2020

This is a draft PR to move CI jobs from Travis to Azure DevOps.

This PR moves the remaining Mac + Linux jobs that are currently running on Travis over to Azure DevOps. This project is ending its reliance on Travis because of Travis's strategic decision to offer only very limited support for open source projects. See #3519 for the full background and discussion.

@jameslamb
Collaborator Author

@guolinke could you add docker to the image in the new sh-ubuntu pool?

If we want to continue running Linux CI jobs in this container (https://github.com/guolinke/lightgbm-ci-docker), we have to be able to run a docker daemon on VMs in that pool.

https://dev.azure.com/lightgbm-ci/lightgbm-ci/_build/results?buildId=8274&view=logs&j=c28dceab-947a-5848-c21f-eef3695e5f11&t=53b9ead0-ad82-4deb-a3ac-f05a6192bec9


@StrikerRUS
Collaborator

@jameslamb I'd like to preserve the strategy we used with Travis for duplicated Python tests: oldest possible OS + default compiler for Azure, and newest available OS + non-default compiler for Travis. WDYT about this? Can we transfer "newest available OS + non-default compiler for Travis" to the new Azure pools?

@jameslamb
Collaborator Author

jameslamb commented Dec 23, 2020

@jameslamb I'd like to preserve the strategy we used with Travis for duplicated Python tests: oldest possible OS + default compiler for Azure, and newest available OS + non-default compiler for Travis. WDYT about this? Can we transfer "newest available OS + non-default compiler for Travis" to the new Azure pools?

One hard thing here is that I think (based on https://docs.microsoft.com/en-us/azure/devops/pipelines/yaml-schema?view=azure-devops&tabs=schema%2Cparameter-schema#pool) that the VM image used for these new user-managed pools will be frozen, and can't be something dynamic like vmImage: ubuntu-latest. So to have it be the "newest available" OS, @guolinke would have to manually update it from time to time.

For Linux, it's fairly simple to switch the OS using Docker, so I think it could work like this:

For Mac, we don't have a group of self-hosted runners (#3519 (comment)), so we'll have to make it work with Microsoft-hosted ones:

  • mac "oldest OS": runs on a pool of Microsoft-hosted runners, using vmImage: macos-10.14
  • mac "newest OS": runs on a different pool of Microsoft-hosted runners, using vmImage: macos-latest

Using this strategy, nothing would have to be manually changed in the self-hosted runners.

I think this can work, especially considering that there are fewer Mac jobs than Linux jobs: [sdist, bdist, regular] * 2 + [mpi source, mpi pip, mpi wheel] = 9 total.
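For reference, the two flavors of pool being discussed look roughly like this in Azure Pipelines YAML. This is only a minimal sketch: the sh-ubuntu pool name comes from this thread, while the job names and steps are purely illustrative.

jobs:
  # Microsoft-hosted agents: vmImage can be a rolling alias such as macos-latest,
  # so "newest available OS" needs no manual maintenance
  - job: mac_newest
    pool:
      vmImage: 'macos-latest'
    steps:
      - script: echo "build and test here"
  # Self-hosted agents: the job only names the pool; the OS is whatever the pool's
  # VMs were provisioned with, so it stays fixed until someone updates the pool
  - job: linux_self_hosted
    pool:
      name: sh-ubuntu
    steps:
      - script: echo "build and test here"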

@guolinke
Collaborator

@jameslamb It uses dynamically created VMs, so we cannot pre-install Docker.
Refer to https://stackoverflow.com/questions/63940306/azure-devops-pipelines-scale-set-agents-installing-docker
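Along the lines of that Stack Overflow answer, Docker could be baked in at provision time with a cloud-init file passed as custom data when the scale set is created. This is only a sketch, not the configuration actually used here, and the AzDevOps account name is an assumption about the default user of Azure DevOps scale-set agents.

#cloud-config
# runs once when each new scale-set VM is created, so every dynamically
# created agent comes up with a working Docker daemon
package_update: true
packages:
  - docker.io
runcmd:
  - systemctl enable --now docker
  # assumed default account for Azure DevOps scale-set agents
  - usermod -aG docker AzDevOps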


@jameslamb
Collaborator Author

Oh ok, I'm surprised that it isn't possible to create a custom image with any software you want, like you can do with AWS AMIs (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html#creating-an-ami).

Based on your comments above, I can try customizing the init. Thanks!

@StrikerRUS
Collaborator

@jameslamb

Using this strategy, nothing would have to be manually changed in the self-hosted runners.

Sounds great!

It uses dynamically created VMs, so we cannot pre-install Docker.

It seems the only solution is to use cloud-init when creating the VM/VMSS.

@jameslamb @guolinke
WDYT if we just increase the number of jobs at Azure for the tasks migrated from Travis, but continue using the free runners from Microsoft? This way we keep the advantages of dynamic, always-newest VMs and don't increase the maintenance (especially manual) burden. Of course, it will increase CI time, but I think we can accept that. BTW, this is the only way we can work with macOS runners.

@jameslamb
Collaborator Author

WDYT if we just increase the number of jobs at Azure for the tasks migrated from Travis, but continue using the free runners from Microsoft?

I'll try this first and see how much longer the CI takes. I agree with your earlier comments that, at our pace of development, extra CI time isn't a huge problem.

@guolinke
Collaborator

The sh-ubuntu pool seems to work now, you can give it a try.
Actually, we don't need to update them manually. When it creates a new VM, it always installs the latest Docker and agents.

@jameslamb
Collaborator Author

@guolinke can you give me permissions in Azure DevOps to cancel / re-try builds for the lightgbm-ci pipeline? I think it would help me to go faster on this.

@jameslamb
Collaborator Author

Ok @StrikerRUS can you take a look? I'd like to hear your suggestions.

I currently have the following setup on Azure (Windows excluded because it's unchanged by this PR).

  • Linux (ubuntu-latest, microsoft-hosted runners)
    • check-docs
    • lint
  • Linux (ubuntu-14.04 container, self-hosted runners)
    • regular (gcc)
    • sdist (gcc)
    • bdist (gcc, python 3.7)
    • if-else (gcc)
    • mpi source (gcc)
    • gpu source (gcc)
  • Linux (ubuntu-latest, microsoft-hosted runners)
    • regular (clang)
    • sdist (clang)
    • bdist (clang, python 3.7)
    • if-else (clang)
    • mpi source (clang)
    • gpu source (clang)
    • gpu pip (clang, python 3.6)
    • gpu wheel (clang, Python 3.7)
  • Mac (macos-10.14, microsoft-hosted runners)
    • regular (clang, python 3.7)
    • sdist (clang)
    • bdist (clang)
  • Mac (macos-latest, microsoft-hosted runners)
    • regular (gcc, python 3.7)
    • sdist (gcc)
    • bdist (gcc)
    • if-else (gcc)
    • mpi source (gcc)
    • mpi pip (gcc, python 3.7)
    • mpi wheel (gcc, python 3.7)

Most jobs seem to be passing, but I think this setup will be very slow.

  1. We might be facing either account limits or capacity problems on Azure. I started https://dev.azure.com/lightgbm-ci/lightgbm-ci/_build/results?buildId=8308&view=results 90 minutes ago and it still had 4 Mac jobs in the "Queued" state.
  2. Maybe the runners in the sh-ubuntu self-hosted pool are smaller (in terms of memory and CPU) than the Microsoft-hosted runners? I observed that they spend a long time pulling container images, longer than the Microsoft-hosted ones do.

@jameslamb
Collaborator Author

I don't think the capacity / limits problems are specific to this PR's changes.

I started https://dev.azure.com/lightgbm-ci/lightgbm-ci/_build/results?buildId=8312&view=results (for #3688) 15 minutes ago and most builds have not started yet


@guolinke
Collaborator

Hi @jameslamb

Can you give me a Microsoft account, so that I can add you?

The self-hosted agents are created dynamically, so they may need more time to initialize.
By default, each agent has 2 CPU cores; I can increase it if needed.

@StrikerRUS
Collaborator

@jameslamb

can you take a look?

I like the list you've provided. 👍

Most jobs seem to be passing, but I think this setup will be very slow.

I still think that coverage is more important than testing time.

We might be facing either account limits or capacity problems on Azure.

If I'm not mistaken, we are limited to 10 free parallel jobs. Maybe another PR was being built at the same time as yours.

Could you please remove all occurrences of the maxParallel param to give Azure full control over the jobs' queue?

Maybe the runners in the sh-ubuntu self-hosted pool are smaller (in terms of memory and CPU) than the Microsoft-hosted runners?

Hosted agents run on Microsoft Azure general purpose VMs. Standard_DS2_v2 describes the CPU, memory, and network characteristics you can expect.
https://docs.microsoft.com/en-us/learn/modules/host-build-agent/2-choose-a-build-agent
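For context, maxParallel caps how many legs of a job matrix run at once, so dropping it lets Azure fan jobs out up to the account's parallel-job limit. A minimal sketch of the idea (the task names are examples from this thread, not the real pipeline definition):

jobs:
  - job: Linux
    pool:
      vmImage: 'ubuntu-latest'
    strategy:
      # maxParallel: 3   <- removing this line lets Azure schedule all legs at once,
      #                     bounded only by the 10 free parallel jobs
      matrix:
        regular:
          TASK: regular
        sdist:
          TASK: sdist
        bdist:
          TASK: bdist
    steps:
      - script: echo "run $(TASK)"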

@StrikerRUS
Collaborator

Seems we need OpenMP libomp-dev for Clang.

@jameslamb
Collaborator Author

Seems we need OpenMP libomp-dev for Clang.

Yep, you're right! I didn't catch it before because I was accidentally using gcc. I added that in 21a98ca and it seems to have worked.
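(For reference, the fix boils down to installing Clang's OpenMP runtime next to the compiler in the Linux setup. A hedged sketch of the kind of step involved, not the exact contents of that commit:)

steps:
  - script: |
      sudo apt-get update
      sudo apt-get install -y clang libomp-dev
    displayName: 'Install Clang and its OpenMP runtime (libomp-dev)'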

One last test is failing, and I'm unsure what to do. Linux gpu_source (https://dev.azure.com/lightgbm-ci/lightgbm-ci/_build/results?buildId=8510&view=logs&j=9ce42e5d-f31a-544a-a6de-dc42f8c013ed&t=b696def5-78ae-590c-4234-04332342ba2d) is failing with a segfault

============================= test session starts ==============================
platform linux -- Python 3.6.12, pytest-6.2.1, py-1.10.0, pluggy-0.13.1
rootdir: /__w/1/s
collected 238 items

../tests/c_api_test/test_.py .Fatal Python error: Segmentation fault


  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/_pytest/python.py", line 183 in pytest_pyfunc_call
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/pluggy/manager.py", line 87 in <lambda>
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/pluggy/manager.py", line 93 in _hookexec
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/pluggy/hooks.py", line 286 in __call__
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/_pytest/python.py", line 1641 in runtest
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/_pytest/runner.py", line 162 in pytest_runtest_call
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/pluggy/manager.py", line 87 in <lambda>
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/pluggy/manager.py", line 93 in _hookexec
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/pluggy/hooks.py", line 286 in __call__
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/_pytest/runner.py", line 255 in <lambda>
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/_pytest/runner.py", line 311 in from_call
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/_pytest/runner.py", line 255 in call_runtest_hook
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/_pytest/runner.py", line 215 in call_and_report
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/_pytest/runner.py", line 126 in runtestprotocol
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/_pytest/runner.py", line 109 in pytest_runtest_protocol
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/pluggy/manager.py", line 87 in <lambda>
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/pluggy/manager.py", line 93 in _hookexec
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/pluggy/hooks.py", line 286 in __call__
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/_pytest/main.py", line 348 in pytest_runtestloop
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/pluggy/manager.py", line 87 in <lambda>
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/pluggy/manager.py", line 93 in _hookexec
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/pluggy/hooks.py", line 286 in __call__
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/_pytest/main.py", line 323 in _main
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/_pytest/main.py", line 269 in wrap_session
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/_pytest/main.py", line 316 in pytest_cmdline_main
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/pluggy/manager.py", line 87 in <lambda>
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/pluggy/manager.py", line 93 in _hookexec
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/pluggy/hooks.py", line 286 in __call__
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/_pytest/config/__init__.py", line 163 in main
  File "/opt/conda/envs/test-env/lib/python3.6/site-packages/_pytest/config/__init__.py", line 185 in console_main
  File "/opt/conda/envs/test-env/bin/pytest", line 11 in <module>
/__w/1/s/.ci/test.sh: line 180:  1972 Segmentation fault      (core dumped) pytest $BUILD_DIRECTORY/tests

This is extra weird because I didn't touch that job in this PR, and it's been passing on other PRs and on master. For example, here's the most recent build on master: https://dev.azure.com/lightgbm-ci/lightgbm-ci/_build/results?buildId=8503&view=logs&j=9ce42e5d-f31a-544a-a6de-dc42f8c013ed&t=b696def5-78ae-590c-4234-04332342ba2d.

@StrikerRUS have you seen this before or have any ideas what I can try?

@StrikerRUS
Collaborator

@jameslamb So weird! I expected the new GPU job with Clang to fail (refer to #3475), but not the old one with gcc! But the symptoms look very similar to the linked issue...

@jameslamb
Collaborator Author

@jameslamb So weird! I expected the new GPU job with Clang to fail (refer to #3475), but not the old one with gcc! But the symptoms look very similar to the linked issue...

yeah I'm pretty confused. The thing that's weirding me out the most is that this isn't even one of the jobs that is being moved over from Travis. It shouldn't be affected by this PR at all. I'll double-check my changes in setup.sh and test.sh, maybe I missed an if statement somewhere.

I do see in the logs that the failing job is using gcc as expected 😕

-- The C compiler identification is GNU 4.8.4
-- The CXX compiler identification is GNU 4.8.4
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped

@StrikerRUS
Collaborator

StrikerRUS commented Jan 12, 2021

Does running tests on a self-hosted pool of agents mean that we actually use the same machine each time? I'm afraid that the environment changes made to run Docker with the latest Ubuntu caused some conflicts.

@jameslamb
Collaborator Author

jameslamb commented Jan 12, 2021

Does running tests on a self-hosted pool of agents mean that we actually use the same machine each time? I'm afraid that the environment changes made to run Docker with the latest Ubuntu caused some conflicts.

😫 I think that's possible. Maybe when you use containers, Azure DevOps schedules multiple jobs into the same VM, assuming they're isolated?

I just looked in the setup logs, and I see several volumes being mounted in, which is one way that information could leak between jobs.

See the "initialize containers" step in https://dev.azure.com/lightgbm-ci/lightgbm-ci/_build/results?buildId=8510&view=logs&j=b463fb3a-b487-5cfc-fc07-6d216464ba86&t=c3cec64d-f953-425d-a2c4-95244904eddb.

/usr/bin/docker create \
    --name ubuntu-latest_ubuntulatest_dd8aa1 \
    --label b0e97f \
    --network vsts_network_571a4ba0d03d4c4cbd4e55cf6ef595c3 \
    --name ci-container \
    -v /usr/bin/docker:/tmp/docker:ro \
    -v "/var/run/docker.sock":"/var/run/docker.sock" \
    -v "/agent/_work/1":"/__w/1" \
    -v "/agent/_work/_temp":"/__w/_temp" \
    -v "/agent/_work/_tasks":"/__w/_tasks" \
    -v "/agent/_work/_tool":"/__t" \
    -v "/agent/externals":"/__a/externals":ro \
    -v "/agent/_work/.taskkey":"/__w/.taskkey" \
    ubuntu:latest \
    "/__a/externals/node/bin/node" -e "setInterval(function(){}, 24 * 60 * 60 * 1000);"

If something like CMake's cache was getting written to one of those directories, details from one job could sneak into another.
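If shared volumes on a reused agent turn out to be the culprit, one common mitigation (a sketch of the general technique, not what this PR ended up doing) is to ask Azure Pipelines to wipe the work folder before each job:

jobs:
  - job: Linux_regular
    pool:
      name: sh-ubuntu
    workspace:
      clean: all   # delete sources, outputs, and everything else under the agent's work folder
    steps:
      - script: echo "build and test here"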

@StrikerRUS
Collaborator

StrikerRUS commented Jan 12, 2021

I think that's possible.

Oh no, this is bad!

However, I was able to run the whole bunch of Azure jobs successfully with the following changes in a fork of your branch (of course, ignore the changes for build triggers and the typo for the MPI job 😄):

[screenshot of the changes]

@StrikerRUS
Collaborator

Oh, Azure really lacks the ability to cancel jobs! 😢

I tried to skip tests and run our examples.
Just removed this line

pytest $BUILD_DIRECTORY/tests || exit -1

and changed regular to gpu in this piece of code:

LightGBM/.ci/test.sh

Lines 174 to 196 in 78d31d9

if [[ $TASK == "regular" ]]; then
    if [[ $AZURE == "true" ]]; then
        if [[ $OS_NAME == "macos" ]]; then
            cp $BUILD_DIRECTORY/lib_lightgbm.so $BUILD_ARTIFACTSTAGINGDIRECTORY/lib_lightgbm.dylib
        else
            if [[ $COMPILER == "gcc" ]]; then
                objdump -T $BUILD_DIRECTORY/lib_lightgbm.so > $BUILD_DIRECTORY/objdump.log || exit -1
                python $BUILD_DIRECTORY/helpers/check_dynamic_dependencies.py $BUILD_DIRECTORY/objdump.log || exit -1
            fi
            cp $BUILD_DIRECTORY/lib_lightgbm.so $BUILD_ARTIFACTSTAGINGDIRECTORY/lib_lightgbm.so
        fi
    fi
    cd $BUILD_DIRECTORY/examples/python-guide
    sed -i'.bak' '/import lightgbm as lgb/a\
import matplotlib\
matplotlib.use\(\"Agg\"\)\
' plot_example.py  # prevent interactive window mode
    sed -i'.bak' 's/graph.render(view=True)/graph.render(view=False)/' plot_example.py
    for f in *.py; do python $f || exit -1; done  # run all examples
    cd $BUILD_DIRECTORY/examples/python-guide/notebooks
    conda install -q -y -n $CONDA_ENV ipywidgets notebook
    jupyter nbconvert --ExecutePreprocessor.timeout=180 --to notebook --execute --inplace *.ipynb || exit -1  # run all notebooks
fi

Here is the output:

LightGBMError                             Traceback (most recent call last)
<ipython-input-1-fb9e9ad1250c> in <module>
      7                 categorical_feature=[21],
      8                 evals_result=evals_result,
----> 9                 verbose_eval=10)

~/.local/lib/python3.6/site-packages/lightgbm/engine.py in train(params, train_set, num_boost_round, valid_sets, valid_names, fobj, feval, init_model, feature_name, categorical_feature, early_stopping_rounds, evals_result, verbose_eval, learning_rates, keep_training_booster, callbacks)
    226     # construct booster
    227     try:
--> 228         booster = Booster(params=params, train_set=train_set)
    229         if is_valid_contain_train:
    230             booster.set_train_data_name(train_data_name)

~/.local/lib/python3.6/site-packages/lightgbm/basic.py in __init__(self, params, train_set, model_file, model_str, silent)
   2074                 train_set.handle,
   2075                 c_str(params_str),
-> 2076                 ctypes.byref(self.handle)))
   2077             # save reference to data
   2078             self.train_set = train_set

~/.local/lib/python3.6/site-packages/lightgbm/basic.py in _safe_call(ret)
     50     """
     51     if ret != 0:
---> 52         raise LightGBMError(_LIB.LGBM_GetLastError().decode('utf-8'))
     53 
     54 

LightGBMError: No OpenCL device found
LightGBMError: No OpenCL device found

Full log: https://dev.azure.com/lightgbm-ci/8461a79b-5dce-4085-ad70-4410b7135276/_apis/build/builds/8516/logs/9

So weird!

@StrikerRUS
Collaborator

BTW, do you know why the Initialize containers step takes so much time?

It is not something exclusive to image: 'ubuntu:latest' but is a general issue for the sh-ubuntu pool. Screenshot from master:

[screenshot of the slow Initialize containers step on a master build]

@jameslamb
Collaborator Author

BTW, do you know why the Initialize containers step takes so much time?

It is not something exclusive to image: 'ubuntu:latest' but is a general issue for the sh-ubuntu pool. Screenshot from master:

[screenshot of the slow Initialize containers step on a master build]

my first guess is that the runners in that pool are smaller (less available memory / CPU / bandwidth) than the Microsoft-hosted ones. I'm not sure how to check that, though.
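One way to check would be a throwaway diagnostic step that prints what the agent actually has (illustrative only, not part of this PR):

steps:
  - script: |
      nproc      # CPU cores visible to the job
      free -h    # available memory
      df -h .    # disk space in the work directory
    displayName: 'Inspect agent resources'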

@jameslamb
Collaborator Author

I think that's possible.

Oh no, this is bad!

However, I was able to run the whole bunch of Azure jobs successfully with the following changes in a fork of your branch (of course, ignore the changes for build triggers and the typo for the MPI job 😄):

[screenshot of the changes]

oh awesome! I just made this change in 3a3c845

@jameslamb requested a review from StrikerRUS January 13, 2021 04:49
@StrikerRUS
Collaborator

my first guess is that the runners in that pool are smaller ...

Then it is strange that only one step related to Docker is suffering...

Also, Microsoft-hosted agents don't look very powerful

Hosted agents run on Microsoft Azure general purpose VMs. Standard_DS2_v2 describes the CPU, memory, and network characteristics you can expect.
https://docs.microsoft.com/en-us/learn/modules/host-build-agent/2-choose-a-build-agent

Maybe again some downloading issues like #3682?..

@StrikerRUS
Collaborator

StrikerRUS left a comment


LGTM! Excellent PR!

@StrikerRUS
Collaborator

By comparing logs before and after the migration to self-hosted agents, I can see only two possible causes of the 2x longer runs of the Initialize containers step:

  • Docker daemon API version: '1.40' vs Docker daemon API version: '1.41'
  •  ubuntu-14.04: Pulling from lightgbm/vsts-agent
     2e6e20c8e2e6: Already exists
     95201152d9ff: Already exists
     5f63a3b65493: Already exists
     7384bd57574f: Pulling fs layer
    
    vs
    ubuntu-14.04: Pulling from lightgbm/vsts-agent
    2e6e20c8e2e6: Pulling fs layer
    95201152d9ff: Pulling fs layer
    5f63a3b65493: Pulling fs layer
    7384bd57574f: Pulling fs layer
    

https://editor.mergely.com/c7QUKpfd/

@jameslamb
Collaborator Author

Already exists
5f63a3b65493: Already exists
7384bd57574f: Pulling fs layer

I'm surprised by this, because the Azure docs say that Docker layer caching isn't available for Microsoft-hosted agents: https://docs.microsoft.com/en-us/azure/devops/pipelines/ecosystems/containers/build-image?view=azure-devops#is-reutilizing-layer-caching-during-builds-possible-on-azure-pipelines.

And I expected that having a dedicated pool of agents would probably mean we'd get layer caching for free.

I'm really not sure :/
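If the self-hosted VMs are provisioned via cloud-init anyway, one way to approximate layer caching would be to warm the Docker image cache at provision time. A sketch only: it assumes the lightgbm/vsts-agent image mentioned above and would only help with layers that do not change between builds.

#cloud-config
runcmd:
  - systemctl enable --now docker
  # pre-pull the CI image so "Initialize containers" only fetches layers
  # that changed after the VM was created
  - docker pull lightgbm/vsts-agent:ubuntu-14.04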

@jameslamb
Collaborator Author

Thanks for the reviews @StrikerRUS and for setting up these self-hosted runners @guolinke. We have more things to tinker with in LightGBM's CI, but overall I think this should improve the stability.

@jameslamb merged commit 318f7fa into master Jan 13, 2021
@jameslamb deleted the ci/move-to-azure branch January 13, 2021 15:56
@StrikerRUS
Collaborator

Just for the record.

BTW, do you know why the Initialize containers step takes so much time?

Almost the same issue here. But the OP there didn't change anything in their config.

https://developercommunity.visualstudio.com/content/problem/1211173/initialize-containers-is-very-slow.html

Maybe different geo regions?..

Let's hope this slowness will be resolved without our help.

@jameslamb
Collaborator Author

Thanks, I hope it's just something like that! Very frustrating that support just closed that ticket as "Closed - not a bug" 😭

@StrikerRUS
Collaborator

There was some discussion (click the small dialog icon with the number 13 under the post).

They said that they cannot reproduce the issue 🙁

Thanks for your reply.
According to the current information, I cannot make sure the root reason temporarily. I cannot reproduce the problem. It may be caused by some specific machines' performance. If it occurs again, please feel free to post it here, we will continue to investigate it further.
Thanks for your understanding.

@jameslamb
Collaborator Author

oh I see, ok

@StrikerRUS
Collaborator

Related to slow Docker pulls: https://david.gardiner.net.au/2020/05/docker-perf-on-azure-pipelines.html.

@github-actions

This pull request has been automatically locked since there has not been any recent activity since it was closed. To start a new related discussion, open a new issue at https://github.com/microsoft/LightGBM/issues including a reference to this.

@github-actions bot locked as resolved and limited conversation to collaborators Aug 24, 2023