macOS

Supports CPU and MPS (Metal M1/M2).

Install

  • Download and install Miniconda for Python 3.10.
  • Run Miniconda
  • Set up the environment with conda, including Rust:
    conda create -n h2ogpt python=3.10 rust
    conda activate h2ogpt
  • Install dependencies:
    git clone https://github.com/h2oai/h2ogpt.git
    cd h2ogpt
    
    # fix any bad env
    pip uninstall -y pandoc pypandoc pypandoc-binary
    
    # CPU only:
    pip install -r requirements.txt --extra-index-url https://download.pytorch.org/whl/cpu
    
    # GPU only:
    pip install -r requirements.txt --extra-index-url https://download.pytorch.org/whl/cu117
  • Install document question-answer dependencies:
    # Required for Doc Q/A: LangChain:
    pip install -r reqs_optional/requirements_optional_langchain.txt
    # Required for CPU: LLaMa/GPT4All:
    pip install -r reqs_optional/requirements_optional_gpt4all.txt
    # Optional: PyMuPDF/ArXiv:
    pip install -r reqs_optional/requirements_optional_langchain.gpllike.txt
    # Optional: Selenium/PlayWright:
    pip install -r reqs_optional/requirements_optional_langchain.urls.txt
    # Optional: for supporting unstructured package
    python -m nltk.downloader all
  • To support Word and Excel documents, download LibreOffice: https://www.libreoffice.org/download/download-libreoffice/.
  • To support OCR, install Tesseract (see its documentation):
    brew install libmagic
    brew link libmagic
    brew install poppler
    brew install tesseract
    brew install tesseract-lang
  • Metal M1/M2 Only: Install newer Torch for GPU support:
    pip uninstall -y torch
    pip install --pre torch --extra-index-url https://download.pytorch.org/whl/nightly/cpu
    To verify whether Torch uses MPS, run the Python script below:
     import torch
     if torch.backends.mps.is_available():
         mps_device = torch.device("mps")
         x = torch.ones(1, device=mps_device)
         print(x)
     else:
         print("MPS device not found.")
    Expected output:
    tensor([1.], device='mps:0')
  • Metal M1/M2 Only: Install and setup GPU-specific dependencies to support LLaMa.cpp on GPU:
    pip uninstall llama-cpp-python -y
    CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir
    • Pass a different value of --model_path_llama if you download a different GGML v3 model from TheBloke, or pass a URL/path in the UI. The default model can be downloaded here and placed in the repo folder, or you can give this URL.
    • Note: MPS only supports v3 GGML 4-bit quantized models, so use LLaMa models whose filenames end with ggmlv3 and q4_x.bin.
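Given the note above, a quick filename check can catch an incompatible model before loading it. This is a minimal sketch; the pattern is inferred from the naming convention of GGML v3 releases and is not part of h2oGPT:

```python
import re

def is_mps_compatible(filename):
    """True if the filename matches the v3 GGML 4-bit naming convention,
    e.g. llama-2-7b-chat.ggmlv3.q4_0.bin."""
    return re.search(r"ggmlv3\.q4_\w+\.bin$", filename) is not None

print(is_mps_compatible("llama-2-7b-chat.ggmlv3.q4_0.bin"))  # expect True
print(is_mps_compatible("llama-2-7b-chat.ggmlv3.q8_0.bin"))  # expect False
```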

Run

  • To run LLaMa.cpp model in CPU or GPU mode:
    python generate.py --base_model='llama' --prompt_type=llama2 --score_model=None --langchain_mode='UserData' --user_path=user_path

Ignore the CLI output showing 0.0.0.0; instead go to http://localhost:7860 or the public live URL printed by the server (disable the shared link with --share=False).
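The 0.0.0.0 printed by the server is the bind address, not a browsable URL; the substitution to perform is just a hostname rewrite. A tiny illustration (the helper is hypothetical, not part of h2oGPT):

```python
def browsable_url(printed_url):
    """Rewrite the server's 0.0.0.0 bind address into a URL a browser can open."""
    return printed_url.replace("0.0.0.0", "localhost")

print(browsable_url("http://0.0.0.0:7860"))  # -> http://localhost:7860
```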

  • Full Hugging Face Model (recommended for M1/M2 only):

    python generate.py --base_model=h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b --score_model=None --langchain_mode='UserData' --user_path=user_path
  • CLI mode:

    python generate.py --base_model='llama' --prompt_type=llama2 --score_model=None --langchain_mode='UserData' --user_path=user_path --cli=True

See CPU and GPU for other general aspects of using h2oGPT on CPU or GPU, such as which models to try.
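The generate.py invocations above share a common set of flags, so it can be convenient to assemble the command line programmatically. A sketch (generate.py and its flags are from this guide; the helper itself is hypothetical):

```python
def build_cmd(base_model, **flags):
    """Assemble a generate.py command line from keyword flags."""
    parts = ["python", "generate.py", f"--base_model={base_model}"]
    parts += [f"--{k}={v}" for k, v in flags.items()]
    return parts

print(" ".join(build_cmd("llama", prompt_type="llama2", score_model=None,
                         langchain_mode="UserData", user_path="user_path")))
```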


Issues

  • If you see ld: library not found for -lSystem, set the flag below and then retry the pip install commands from scratch:

    export LDFLAGS=-L/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/lib
  • If conda Rust has issues, you can download and install [Native Rust](https://www.geeksforgeeks.org/how-to-install-rust-in-macos/):

    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
    # enter new shell and test:
    rustc --version
  • When running on a Mac with Intel hardware (not M1), you may run into

    clang: error: the clang compiler does not support '-march=native'

    during pip install. If so, set ARCHFLAGS during pip install, e.g.:

    ARCHFLAGS="-arch x86_64" pip install -r requirements.txt
  • If you encounter an error while building a wheel during the pip install process, you may need to install a C++ compiler on your computer.
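Before retrying pip install, it can help to confirm a C++ compiler is actually on PATH (on macOS, `xcode-select --install` provides clang++). A small sketch, not part of h2oGPT:

```python
import shutil

def find_cxx_compiler(candidates=("clang++", "g++", "c++")):
    """Return the path of the first C++ compiler found on PATH, or None."""
    for name in candidates:
        path = shutil.which(name)
        if path:
            return path
    return None

compiler = find_cxx_compiler()
print(compiler or "no C++ compiler found; run: xcode-select --install")
```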