
Error in setup.py "No module named 'torch'" when installing with Poetry #156

Open
sisp opened this issue Aug 4, 2021 · 17 comments
Labels: enhancement (New feature or request)

sisp commented Aug 4, 2021

When I try to install torch-sparse using Poetry, I get the following error, which occurs in setup.py:

ModuleNotFoundError: No module named 'torch'

The reason is that torch-sparse imports torch in setup.py while torch is not yet installed. Since those torch imports are only needed to build compiled extensions, it should be possible to avoid importing torch when installing the torch-sparse wheel package.

These are the commands to reproduce the problem (tested with Poetry v1.1.7):

$ poetry init -n --python '^3.6.2' --dependency torch --dependency torch-sparse
$ poetry install
Creating virtualenv torch-sparse-poetry in /tmp/torch-sparse-poetry/.venv
Updating dependencies
Resolving dependencies... (77.1s)

Writing lock file

Package operations: 6 installs, 0 updates, 0 removals

  • Installing numpy (1.19.5)
  • Installing dataclasses (0.8)
  • Installing scipy (1.5.4)
  • Installing typing-extensions (3.10.0.0)
  • Installing torch (1.9.0)
  • Installing torch-sparse (0.6.11): Failed

  EnvCommandError

  Command ['/tmp/torch-sparse-poetry/.venv/bin/pip', 'install', '--no-deps', '$HOME/.cache/pypoetry/artifacts/59/cf/7b/23094d3d3aa79d571458529d8031882ce27d36db73083987acdab34868/torch_sparse-0.6.11.tar.gz'] errored with the following return code 1, and output: 
  Processing $HOME/.cache/pypoetry/artifacts/59/cf/7b/23094d3d3aa79d571458529d8031882ce27d36db73083987acdab34868/torch_sparse-0.6.11.tar.gz
      ERROR: Command errored out with exit status 1:
       command: /tmp/torch-sparse-poetry/.venv/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-vk1oqsni/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-vk1oqsni/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-2tsa72d0
           cwd: /tmp/pip-req-build-vk1oqsni/
      Complete output (5 lines):
      Traceback (most recent call last):
        File "<string>", line 1, in <module>
        File "/tmp/pip-req-build-vk1oqsni/setup.py", line 8, in <module>
          import torch
      ModuleNotFoundError: No module named 'torch'
      ----------------------------------------
  WARNING: Discarding file://$HOME/.cache/pypoetry/artifacts/59/cf/7b/23094d3d3aa79d571458529d8031882ce27d36db73083987acdab34868/torch_sparse-0.6.11.tar.gz. Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
  ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
  WARNING: You are using pip version 21.1.3; however, version 21.2.2 is available.
  You should consider upgrading via the '/tmp/torch-sparse-poetry/.venv/bin/python -m pip install --upgrade pip' command.
  

  at ~/.local/share/pypoetry/venv/lib/python3.6/site-packages/poetry/utils/env.py:1101 in _run
      1097│                 output = subprocess.check_output(
      1098│                     cmd, stderr=subprocess.STDOUT, **kwargs
      1099│                 )
      1100│         except CalledProcessError as e:
    → 1101│             raise EnvCommandError(e, input=input_)
      1102│ 
      1103│         return decode(output)
      1104│ 
      1105│     def execute(self, bin, *args, **kwargs):
rusty1s (Owner) commented Aug 5, 2021

This is a current limitation, indeed. The bad thing is that I do not think there exists a workaround for this. Any ideas?

sisp (Author) commented Aug 5, 2021

I would need to verify some assumptions, but I believe it is possible to strictly separate the build steps from the runtime installation. Perhaps instead of passing torch's BuildExtension class to cmdclass directly, you could create a custom command class and import from the torch package inside its run method (which I believe is the method invoked when the command is executed). Something like:

from setuptools import setup
from setuptools.command.build_ext import build_ext


class BuildExtensionCommand(build_ext):
    def run(self):
        # Defer the torch import until the extensions are actually built.
        from torch.utils.cpp_extension import BuildExtension
        cmd = BuildExtension.with_options(no_python_abi_suffix=True, use_ninja=False)(self.distribution)
        cmd.ensure_finalized()
        return cmd.run()


setup(
    # ...
    cmdclass={
        'build_ext': BuildExtensionCommand
    }
)

This snippet is completely unverified, though; it's just a sketch of an idea. I'm also not sure yet how to create the ext_modules list without importing torch globally.

rusty1s (Owner) commented Aug 5, 2021

Thanks for digging into this. If you are interested, please feel free to contribute :)

RexYing pushed a commit to RexYing/pytorch_sparse that referenced this issue on Apr 26, 2022.

@Abhishaike commented:

I'm confused: if torch is a dependency of this library, why is it not included in setup.py as a dependency?

rusty1s (Owner) commented Nov 6, 2022

We need to import torch in setup.py for compilation, so we cannot add it as a dependency. It needs to be installed in advance :(
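
(For context, a stripped-down sketch of why the import is needed: the extension modules are declared with torch's own build helpers at module level in setup.py. The module and file names below are placeholders, not the real ones.)

from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension  # raises ModuleNotFoundError if torch is missing

setup(
    name='torch_sparse',
    # placeholder extension; the real setup.py lists the actual csrc/ sources and compile options
    ext_modules=[CppExtension('torch_sparse._placeholder', ['csrc/placeholder.cpp'])],
    cmdclass={'build_ext': BuildExtension},
)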

JacobHayes commented Nov 17, 2022

I think this might be possible with a pyproject.toml [build-system] requires section from PEP 517, I'll put up a PR!

While pyproject.toml can indeed define build-time deps, that's not sufficient to match the host's CUDA version.
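
For reference, the kind of [build-system] table that would declare torch as a build-time requirement looks roughly like this (a sketch only; the pins are illustrative, and as noted above it still would not select the CUDA variant of torch matching the host):

[build-system]
requires = ["setuptools", "wheel", "torch"]
build-backend = "setuptools.build_meta"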

abrahme commented Feb 10, 2023

What is the consensus workaround here, if there is one?

@JacobHayes commented:

We currently have an install script that installs torch and then these packages. After that, we run poetry install. Since the installed versions of torch* don't match what poetry has locked (poetry expects eg: X.X.X, but sees X.X.X+cu116 or whatever) and would try to reinstall them, we have some hacky code that renames the installed packages (in site-packages) to remove the +cuXYZ from the folder/metadata so it matches poetry's expectations.

TL;DR pretty hacky. 😅 I think others may just avoid placing torch* in their pyproject.toml (assuming they don't have any transitive deps with it)?
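
A rough, hypothetical sketch of the kind of renaming hack described above (not the actual script; it strips the local +cuXYZ suffix from the dist-info metadata in site-packages so the version matches what Poetry locked):

import re
import sysconfig
from pathlib import Path

# Find dist-info directories with a local "+cuXYZ" version suffix and strip it
# from both the directory name and the METADATA "Version:" field.
site_packages = Path(sysconfig.get_paths()["purelib"])
for dist_info in site_packages.glob("torch*+cu*.dist-info"):
    metadata = dist_info / "METADATA"
    text = metadata.read_text()
    metadata.write_text(re.sub(r"^(Version: .+?)\+cu\d+$", r"\1", text, flags=re.M))
    dist_info.rename(dist_info.with_name(re.sub(r"\+cu\d+", "", dist_info.name)))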

abrahme commented Feb 11, 2023

> We currently have an install script that installs torch and then these packages. After that, we run poetry install. Since the installed versions of torch* don't match what poetry has locked (poetry expects eg: X.X.X, but sees X.X.X+cu116 or whatever) and would try to reinstall them, we have some hacky code that renames the installed packages (in site-packages) to remove the +cuXYZ from the folder/metadata so it matches poetry's expectations.
>
> TL;DR pretty hacky. 😅 I think others may just avoid placing torch* in their pyproject.toml (assuming they don't have any transitive deps with it)?

Sorry, I'm pretty new to this and it's also my first time responding to a GitHub issue, so forgive me if this is the wrong way to go about it. However, I'm unsure what "avoiding placing torch* in their pyproject.toml" means in this context. Is it that those torch dependencies are installed without the use of Poetry, and other non-torch dependencies go through Poetry?

@JacobHayes commented:

@abrahme no worries, your response seems like the right way to go!

> is it that those torch dependencies are installed without the use of Poetry, and other non-torch dependencies go through poetry?

Yeah, I'd guess that's what others do. ie: the [tool.poetry.dependencies] would contain most deps but omit torch, torch-sparse, etc. Then, install those separately before or after the poetry install.

Another approach I've seen people do is hard code the URLs to pull wheels/etc from. You can specify markers so the right file is used for each OS/CPU, but you would have to just hard code the cuda version (eg: cu116) in the URL and ensure it aligns with your host(s).
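
If I recall Poetry's multiple-constraints syntax correctly, that hard-coded URL approach looks roughly like this in pyproject.toml. The wheel URLs, versions, and the cu116/cpu tags below are placeholders; pick the exact filenames for your Python/OS/CUDA combination from the data.pyg.org index:

[tool.poetry.dependencies]
# Placeholder wheel URLs -- substitute the actual filenames for your setup.
torch-sparse = [
    { url = "https://data.pyg.org/whl/torch-1.13.0+cu116/torch_sparse-0.6.16+pt113cu116-cp310-cp310-linux_x86_64.whl", markers = "sys_platform == 'linux'" },
    { url = "https://data.pyg.org/whl/torch-1.13.0+cpu/torch_sparse-0.6.16+pt113cpu-cp310-cp310-macosx_10_15_x86_64.whl", markers = "sys_platform == 'darwin'" },
]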

abrahme commented Feb 22, 2023

> @abrahme no worries, your response seems like the right way to go!
>
> > is it that those torch dependencies are installed without the use of Poetry, and other non-torch dependencies go through poetry?
>
> Yeah, I'd guess that's what others do. ie: the [tool.poetry.dependencies] would contain most deps but omit torch, torch-sparse, etc. Then, install those separately before or after the poetry install.
>
> Another approach I've seen people do is hard code the URLs to pull wheels/etc from. You can specify markers so the right file is used for each OS/CPU, but you would have to just hard code the cuda version (eg: cu116) in the URL and ensure it aligns with your host(s).

Thanks for the clarification. I ended up just using pyenv instead and abandoning Poetry.

hemmokarja commented Mar 4, 2023

I got around the issue in the following way:

(1) Configure in the pyproject.toml file the source URL from which the wheels are pulled:

poetry source add torch-wheels https://data.pyg.org/whl/torch-1.12.0+cpu.html

Check that the parts torch-1.12.0 and cpu in the URL match your torch version and CUDA/CPU setup. Refer to the data.pyg.org wheel index for the other available options.

Running the command should add the following kind of section to your pyproject.toml file:

[[tool.poetry.source]]
name = "torch-wheels"
url = "https://data.pyg.org/whl/torch-1.12.0+cpu.html"
default = false
secondary = false

(2) Add and install the torch_sparse package (along with other necessary packages) using the source configured in the .toml file:

poetry add --source torch-wheels pyg_lib torch_scatter torch_sparse torch_cluster torch_spline_conv

Now you should see this in the pyproject.toml file:

[tool.poetry.dependencies]
...
pyg-lib = {version = "^0.1.0+pt112cpu", source = "torch-wheels"}
torch-scatter = {version = "^2.1.0+pt112cpu", source = "torch-wheels"}
torch-sparse = {version = "^0.6.16+pt112cpu", source = "torch-wheels"}
torch-cluster = {version = "^1.6.0+pt112cpu", source = "torch-wheels"}
torch-spline-conv = {version = "^1.2.1+pt112cpu", source = "torch-wheels"}

...and everything should work fine.

Clearly this is not a scalable solution if the repo is used by several people with different CPU/CUDA setups, but it works as a temporary workaround.

@alejandroarmas commented:

What I did was build the wheels from source. Please point out any issues with this approach:

[tool.poetry.group.dev.dependencies]
poethepoet = "^0.18.1"


[tool.poe.tasks]
install-torch-cluster = "pip install git+https://github.com/rusty1s/pytorch_cluster.git"
install-torch-sparse = "pip install git+https://github.com/rusty1s/pytorch_sparse.git"
install-torch-scatter = "pip install git+https://github.com/rusty1s/pytorch_scatter.git"
install-torch-spline-conv = "pip install git+https://github.com/rusty1s/pytorch_spline_conv.git"
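
If I'm reading poethepoet's CLI right, the tasks would then be run through Poetry after torch itself is available, e.g.:

poetry install
poetry run poe install-torch-scatter
poetry run poe install-torch-sparse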

WenjieDu added a commit to WenjieDu/PyPOTS that referenced this issue Apr 11, 2023
@raphael-assal commented:

On Mac, @hemmokarja's solution of setting up a secondary source failed for me:

[[tool.poetry.source]]
name = "torch-wheels"
url = "https://data.pyg.org/whl/torch-1.12.0+cpu.html"
default = false
secondary = false

poetry add pyg-lib --source torch-wheels -v raised "Unable to find installation candidates for pyg-lib (0.3.1+pt20cpu)".

After hours of frustration, I realised by checking the source list that Poetry was trying to fetch non-macOS versions (ending with "cpu", e.g. 0.3.1+pt21cpu).

Simply running poetry add pyg-lib="0.3.0+pt21" --source torch-wheels -v worked.
I'm sure it generalizes to torch_{sparse, scatter, ...} too.
Hope it helps!

aisven commented Jul 1, 2024

@raphael-assal Thank you for your macOS-specific remark. I know it might be asking a lot, but could you provide a full recipe to get it running as of today? I think Poetry has changed a bit and the above content might not be up to date; at least when I try these commands, I run into trouble. (Note also that I have a Mac with an Intel CPU, so I cannot use PyTorch higher than 2.2.x.)

atemate commented Sep 24, 2024

If the Python environment (the system Python installation, or the one inside a Docker image) does have torch installed but Poetry still cannot access it when executing setup.py, try:

export VIRTUALENV_SYSTEM_SITE_PACKAGES=true

and then run poetry lock / poetry install as usual. This will instruct the virtualenv package, which Poetry uses internally, to enable the --system-site-packages option for all virtual environments it creates (a short command sequence follows the references below):

  • source code of poetry creating the virtual environment
  • source code of virtualenv setting defaults from env vars in format VIRTUALENV_ARGUMENT_NAME
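
A minimal end-to-end sketch of this workaround, assuming torch is already installed in the base interpreter:

export VIRTUALENV_SYSTEM_SITE_PACKAGES=true
poetry lock
poetry install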

atemate commented Sep 24, 2024

TL;DR try export VIRTUALENV_SYSTEM_SITE_PACKAGES=true

See issue python-poetry/poetry#9707
