Add example of LLava encoder (#2375)
Summary:
Pull Request resolved: #2375

Add an example to start enabling LLava, a multimodal model in the generative AI area. In this example, we initiate the process of running LLava through ExecuTorch. See the added README.md for details.

bypass-github-export-checks

Reviewed By: cccclai

Differential Revision: D54812717

fbshipit-source-id: 57f79a925f40594d6c0714b77aefb6193ee2890a
Martin Yuan authored and facebook-github-bot committed Mar 12, 2024
1 parent 4fea983 commit 42eeebc
Showing 4 changed files with 81 additions and 0 deletions.
1 change: 1 addition & 0 deletions examples/models/__init__.py
@@ -26,6 +26,7 @@
"ic4": ("inception_v4", "InceptionV4Model"),
"resnet18": ("resnet", "ResNet18Model"),
"resnet50": ("resnet", "ResNet50Model"),
"llava": ("llava", "LlavaModel"),
}

__all__ = [
17 changes: 17 additions & 0 deletions examples/models/llava_encoder/README.md
@@ -0,0 +1,17 @@
## Summary
In this example, we initiate the process of running multimodal models through ExecuTorch.
- Demonstrate how to export the image encoder of the [LLava](https://github.com/haotian-liu/LLaVA) multimodal model.
- Provide TODO steps for combining the exported .pte file with the existing [exported Llama2 model](https://github.com/pytorch/executorch/tree/main/examples/models/llama2) to build the multimodal pipeline.

## Instructions
Note that this folder does not host the pretrained LLava model.
- To make LLava available, follow the [Install instructions](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#install) in the LLava GitHub repository, and follow that repo's license when using LLava.
- The LLava installation may leave you with a PyTorch version that is out of date for ExecuTorch, so `cd executorch` and run `./install_requirements.sh` to restore the expected dependencies.
- Run `python3 -m examples.portable.scripts.export --model_name="llava_encoder"`. This generates the llava_encoder.pte file; a sketch of the flow behind this command is shown below.
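
For reference, here is a minimal Python sketch of the flow the export command drives, assuming the standard `torch.export` to ExecuTorch lowering path; the actual `examples.portable.scripts.export` entry point may differ in its details.

```python
# Minimal sketch of the export flow, assuming the standard
# torch.export -> ExecuTorch lowering path (not the actual script).
import torch

from examples.models.llava_encoder import LlavaModel
from executorch.exir import to_edge

wrapper = LlavaModel()
model = wrapper.get_eager_model().eval()
example_inputs = wrapper.get_example_inputs()

# Capture the eager module, lower it to the Edge dialect, and serialize
# the resulting ExecuTorch program to a .pte file.
exported_program = torch.export.export(model, example_inputs)
executorch_program = to_edge(exported_program).to_executorch()
with open("llava_encoder.pte", "wb") as f:
    f.write(executorch_program.buffer)
```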

## TODO
- Write the pipeline in C++ (a Python sketch of the combination step follows this list):
  - Take image and text prompts as inputs.
  - Call image-processing functions to preprocess the image tensor.
  - Load the llava_encoder.pte model and run it on the image tensor.
  - Combine the encoder output with the prompt as inputs to the Llama model; call functions in llama_runner.cpp to run the Llama model and get outputs.
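
Until the C++ runner exists, here is a hedged Python sketch of that combination step, mirroring LLava's approach of splicing projected image features into the text embeddings; `embed_tokens`, `prompt_ids`, and `image_features` are hypothetical names, not part of this example.

```python
import torch

def build_multimodal_input(embed_tokens, prompt_ids, image_features):
    # prompt_ids: (seq_len,) token ids for the text prompt.
    # image_features: (num_patches, dim) output of llava_encoder.pte,
    # already projected into the LLM embedding space by mm_projector.
    text_embeds = embed_tokens(prompt_ids)  # (seq_len, dim)
    # Prepend the image tokens; a full runner would instead insert them
    # at the <image> placeholder position within the prompt.
    return torch.cat([image_features, text_embeds], dim=0)
```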
11 changes: 11 additions & 0 deletions examples/models/llava_encoder/__init__.py
@@ -0,0 +1,11 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.

from .model import LlavaModel

__all__ = [
    "LlavaModel",
]
52 changes: 52 additions & 0 deletions examples/models/llava_encoder/model.py
@@ -0,0 +1,52 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.

import torch

from examples.models.model_base import EagerModelBase
from llava.eval.run_llava import load_images, process_images
from llava.mm_utils import get_model_name_from_path

from llava.model.builder import load_pretrained_model
from torch import nn


class EncoderModel(nn.Module):
    """Wraps LLava's vision tower and multimodal projector as a standalone encoder."""

    def __init__(self, llava_model):
        super().__init__()
        self.model_ = llava_model

    def forward(self, images_tensor):
        # Run the vision tower, then project the patch features into the
        # language model's embedding space.
        features = self.model_.get_model().get_vision_tower()(images_tensor)
        features = self.model_.get_model().mm_projector(features)
        return features


class LlavaModel(EagerModelBase):
    def __init__(self):
        # Download and load the pretrained LLava-1.5-7B checkpoint.
        model_path = "liuhaotian/llava-v1.5-7b"
        tokenizer, self.model_, self.image_processor_, context_len = (
            load_pretrained_model(
                model_path=model_path,
                model_base=None,
                model_name=get_model_name_from_path(model_path),
            )
        )
        self.device = "cpu"
        self.model_.to(self.device)
        self.dtype = torch.float32

    def get_eager_model(self):
        model = EncoderModel(self.model_)
        return model

    def get_example_inputs(self):
        # Fetch a sample image and preprocess it into the tensor format
        # the vision tower expects.
        image_file = "https://llava-vl.github.io/static/images/view.jpg"
        images = load_images([image_file])
        images_tensor = process_images(
            images, self.image_processor_, self.model_.config
        ).to(self.model_.device, dtype=torch.float32)
        return (images_tensor,)
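
As a sanity check before exporting, the encoder can be run eagerly on the example inputs. A minimal sketch, assuming the LLava dependencies from the README are installed and the checkpoint can be downloaded:

```python
import torch

from examples.models.llava_encoder import LlavaModel

wrapper = LlavaModel()
encoder = wrapper.get_eager_model().eval()
(images_tensor,) = wrapper.get_example_inputs()
with torch.no_grad():
    features = encoder(images_tensor)
print(features.shape)  # projected patch features for the language model
```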
