
Is there a plan to support Windows? #1640

Open
achalpandeyy opened this issue May 9, 2023 · 67 comments

Comments

@achalpandeyy

I have noticed that the README states Linux as the only compatible platform. https://github.com/openai/triton#compatibility

Some people in the past have managed to compile on Windows #871 (there is even a really old PR for Windows support #24). But still going by the README, I suppose something changed and Triton doesn't support Windows anymore? I haven't tried to compile it myself yet.

I'm interested in the development of this repository, but my main OS is Windows. I'm aware that I can probably use WSL2, but I would still prefer to run it on Windows natively. So my question is: is there a plan to officially support Windows? If so, I can help.

@OPPEYRADY

I really want this as well! It would be so useful for so many things!

@Zodiac505

I would also be really grateful if it happens.

@ptillet
Collaborator

ptillet commented May 11, 2023

This is a pretty frequent request. Let me see what we can do about it.

@liuyunrui123

+1

@patrickvonplaten

With torch.compile relying heavily on Triton, it seems lots of Hugging Face users are also interested in this :-)

@hipsterusername

We have a number of interested parties optimizing inference times for Invoke AI on Windows. We're currently evaluating alternatives, but as @patrickvonplaten noted above, torch.compile is the most straightforward but requires Triton.

@Li-Yanzhi

+1

I get "RuntimeError: Windows not yet supported for torch.compile" with CUDA 12.1 and PyTorch 2.1.0. It seems Triton, which is not available on Windows, is the main reason. How can we get a Windows version?
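Until native support lands, one workaround for the error above is to guard the compilation step behind a platform check and fall back to the uncompiled model elsewhere. A minimal sketch — `maybe_compile` is a hypothetical helper, not part of PyTorch, and the compile function is injected so the sketch stays framework-agnostic:

```python
import sys

def maybe_compile(model, compile_fn=None):
    """Apply compile_fn (e.g. torch.compile) only on platforms where
    Triton is available; return the eager model unchanged elsewhere."""
    if sys.platform.startswith("linux") and compile_fn is not None:
        return compile_fn(model)
    return model  # eager fallback on Windows / macOS
```

Usage would be `model = maybe_compile(model, torch.compile)`, which avoids the RuntimeError on Windows at the cost of running eagerly there.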

@countzero

+1

@Pythonpa

Pythonpa commented Jun 4, 2023

+1. Many Python packages support Windows, and I hope this one will as well.

@domef

domef commented Jun 6, 2023

+1

3 similar comments
@speedystream

+1

@jyizheng

+1

@Bigfield77

+1

@patrickvonplaten

@ptillet anything we could do to help you implement this? With PyTorch 2.x becoming more and more dependent on Triton, this feature request will only become more and more important, I think.

Can we help you here in any way?

@FurkanGozukara

Please add support for Windows.

I hate seeing the "Triton not available on Windows" message.

@ptillet
Collaborator

ptillet commented Jul 4, 2023

The way to help here is probably just to submit a PR that adds Windows support :) We won't have CI for it any time soon, though.

@bartekleon

The issues / solutions found so far (also somewhat related to #1560):

  • Fixing the URL issue ValueError: unknown url type: '' — it seems LLVM_SYSPATH is not set in the environment. I added it, but it still didn't work properly for me. The workaround was setting the variable manually in setup.py: os.environ['LLVM_SYSPATH'] = 'path/to/llvm_build'

  • Another issue was with the target / build type. I couldn't get the MSYS / Ninja generators working, so I am using my default, Visual Studio 17 2022. I had to force the get_build_type function to return RelWithDebInfo.

  • The next issue I hit is that MLIRGPUOps (and the other two files in Conversion) doesn't exist in the build. Since I am using LLVM 17 built from master (version 17 is also used on Linux), it seems it was renamed to MLIRGPUDialect.

  • I also couldn't build with VS + clang (I got an error with a -f flag), so I had to stay with MSVC. There I got an error about the /Werror value being set incorrectly; I had to change the configuration to just set(CMAKE_CXX_FLAGS "/std:c++17").

  • Currently stuck because 'C:\Users\potato\Desktop\llvm-project\build\RelWithDebInfo\bin\mlir-tblgen.exe' is not recognized as an internal or external command, operable program or batch file. It seems mlir-tblgen is not being built.
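The LLVM_SYSPATH workaround from the first bullet can be sketched as a small helper that pins the LLVM build directory before Triton's setup.py probes for it. This is a sketch of the manual edit described above, not part of Triton itself; `pin_llvm_syspath` is a hypothetical name, and the path is a placeholder for your own LLVM build tree:

```python
import os

def pin_llvm_syspath(llvm_build_dir):
    # Respect an LLVM_SYSPATH already set in the environment;
    # otherwise fall back to the given build directory, mirroring
    # the manual os.environ edit to setup.py described above.
    os.environ.setdefault("LLVM_SYSPATH", llvm_build_dir)
    return os.environ["LLVM_SYSPATH"]

# Example (placeholder path): pin_llvm_syspath(r"C:\path\to\llvm_build")
```

Using setdefault rather than a plain assignment means anyone who already exports LLVM_SYSPATH in their shell keeps their own value.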

@gilberto-BE

+1

1 similar comment
@DarkAlchy

+1

@FurkanGozukara

Is there any fork for this?

There is this repo, but I don't know whether it works: https://github.com/PrashantSaikia/Triton-for-Windows

@skirdey

skirdey commented Oct 1, 2023

+1

5 similar comments
@Pevernow

Pevernow commented Oct 1, 2023

+1

@ezra-ch

ezra-ch commented Oct 2, 2023

+1

@FurkanGozukara

+1

@mush42

mush42 commented Oct 5, 2023

+1

@DheerajMadda

+1

@umarbutler

+1

1 similar comment
@Yatagarasu50469

+1

@tiRomani

please 🥺

@dsent

dsent commented May 21, 2024

+1 to this 🥺

@tin2tin

tin2tin commented Jun 3, 2024

+1

@Adillwma

Adillwma commented Jun 9, 2024

+1

@jetaudio270195

+1

1 similar comment
@avielkis

+1

@sipie800

+10

@Systemcluster

In #4045 Windows support was declined with "we don't have the bandwidth to commit to supporting Windows at this time".

@Bionic-Squash

That's disappointing to hear

@skier233

That's disappointing to hear

I can't imagine any feature that could be more important than this, such that "there isn't enough bandwidth".

@FurkanGozukara

You have the bandwidth, money, and manpower to support Windows.

There are even individual developers who have published pre-compiled wheels for Windows.

This is ridiculous.

@biship

biship commented Jul 16, 2024

There are even individual developers who have published pre-compiled wheels for Windows.

Link one...

@FurkanGozukara

There are even individual developers who have published pre-compiled wheels for Windows.

Link one...

https://github.com/wkpark/triton/actions/runs/7246431088

@biship

biship commented Jul 16, 2024

7 months old. Yeah that's bound to be able to handle all the latest models.

@FurkanGozukara

7 months old. Yeah that's bound to be able to handle all the latest models.

That is not the point, though. The point is that OpenAI has all the resources in the world to make Triton work on Windows natively.

seruva19 added a commit to seruva19/kubin-extensions that referenced this issue Aug 30, 2024
Interrogation with CogVLM2 does not work under W11 (triton is not supported on Windows triton-lang/triton#1640)
@darkanubis0100

Where is Triton for Windows? :(

@FurkanGozukara

We are still missing Triton in 2024, in the era of ChatGPT 4.

@FurkanGozukara

Thank you, OpenAI, for taking tens of billions from Microsoft.

(screenshot attached)

@Obr00007576

+1
