diff --git a/python/llm/example/GPU/PyTorch-Models/Model/llama3.2-vision/README.md b/python/llm/example/GPU/PyTorch-Models/Model/llama3.2-vision/README.md
new file mode 100644
index 00000000000..74b315340b1
--- /dev/null
+++ b/python/llm/example/GPU/PyTorch-Models/Model/llama3.2-vision/README.md
@@ -0,0 +1,134 @@
+# Llama3.2-Vision
+In this directory, you will find examples of how to use the IPEX-LLM `optimize_model` API to accelerate Llama3.2-Vision models on [Intel GPUs](../../../README.md). For illustration purposes, we use [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) as a reference Llama3.2-Vision model.
+
+## 0. Requirements
+To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../../../README.md#requirements) for more information.
+
+## Example: Predict Tokens using `generate()` API
+In the example [generate.py](./generate.py), we show a basic use case for a Llama3.2-Vision model to predict the next N tokens using the `generate()` API, with the IPEX-LLM `optimize_model` API on Intel GPUs.
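+
+At its core, the example loads the model with `transformers` and passes it through IPEX-LLM's `optimize_model`; the minimal sketch below mirrors the flow used in [generate.py](./generate.py):
+
+```python
+from transformers import MllamaForConditionalGeneration
+from ipex_llm import optimize_model
+
+# Load the checkpoint, apply IPEX-LLM low-bit optimization (keeping the
+# multi-modal projector unconverted), then move the model to the Intel GPU
+model = MllamaForConditionalGeneration.from_pretrained("meta-llama/Llama-3.2-11B-Vision-Instruct")
+model = optimize_model(model, modules_to_not_convert=["multi_modal_projector"])
+model = model.half().eval().to('xpu')
+```
+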
+### 1. Install
+#### 1.1 Installation on Linux
+We suggest using conda to manage environment:
+```bash
+conda create -n llm python=3.11
+conda activate llm
+# the command below will install intel_extension_for_pytorch==2.1.10+xpu by default
+pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+
+pip install transformers==4.45.0
+```
+
+#### 1.2 Installation on Windows
+We suggest using conda to manage environment:
+```bash
+conda create -n llm python=3.11 libuv
+conda activate llm
+
+# the command below will install intel_extension_for_pytorch==2.1.10+xpu by default
+pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+
+pip install transformers==4.45.0
+```
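+
+After installation (on either OS), a quick optional sanity check is to confirm that the package imports cleanly in the activated environment:
+
+```bash
+python -c "import ipex_llm"
+```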
+
+### 2. Configure oneAPI Environment Variables for Linux
+
+> [!NOTE]
+> Skip this step if you are running on Windows.
+
+This is a required step on Linux when oneAPI was installed via APT or the offline installer. Skip this step if oneAPI was installed via pip.
+
+```bash
+source /opt/intel/oneapi/setvars.sh
+```
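+
+Optionally, you can then confirm that your GPU is visible to the oneAPI runtime by listing the available SYCL devices:
+
+```bash
+sycl-ls
+```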
+
+### 3. Runtime Configurations
+For optimal performance, it is recommended to set several environment variables. Please check out the suggestions based on your device.
+#### 3.1 Configurations for Linux
+
+**For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series**
+
+```bash
+export USE_XETLA=OFF
+export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
+export SYCL_CACHE_PERSISTENT=1
+```
+
+**For Intel Data Center GPU Max Series**
+
+```bash
+export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
+export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
+export SYCL_CACHE_PERSISTENT=1
+export ENABLE_SDP_FUSION=1
+```
+> Note: `libtcmalloc.so` can be installed by `conda install -c conda-forge -y gperftools=2.10`.
+
+**For Intel iGPU**
+
+```bash
+export SYCL_CACHE_PERSISTENT=1
+export BIGDL_LLM_XMX_DISABLED=1
+```
+
+#### 3.2 Configurations for Windows
+
+**For Intel iGPU**
+
+```cmd
+set SYCL_CACHE_PERSISTENT=1
+set BIGDL_LLM_XMX_DISABLED=1
+```
+
+**For Intel Arc™ A-Series Graphics**
+
+```cmd
+set SYCL_CACHE_PERSISTENT=1
+```
+
+> [!NOTE]
+> For the first time that each model runs on Intel iGPU/Intel Arc™ A300-Series or Pro A60, it may take several minutes to compile.
+
+### 4. Running examples
+
+```bash
+python ./generate.py
+```
+
+Arguments info:
+- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the Llama3.2-Vision model (e.g. `meta-llama/Llama-3.2-11B-Vision-Instruct`) to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'meta-llama/Llama-3.2-11B-Vision-Instruct'`.
+- `--image-url-or-path IMAGE_URL_OR_PATH`: argument defining the URL or path of the input image. It defaults to `'https://hf-mirror.com/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg'`.
+- `--prompt PROMPT`: argument defining the prompt to be used for inference (the chat prompt format is applied automatically). It defaults to `'Describe image in detail'`.
+- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.
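+
+For example, to run the example against a local checkpoint folder and a local image (both paths below are illustrative placeholders):
+
+```bash
+python ./generate.py --repo-id-or-model-path /path/to/Llama-3.2-11B-Vision-Instruct --image-url-or-path ./my_image.jpg --prompt "What is in this image?" --n-predict 64
+```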
+
+#### Sample Output
+#### [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct)
+
+```log
+Inference time: xxxx s
+-------------------- Prompt --------------------
+Describe image in detail
+-------------------- Output --------------------
+This image features a charming anthropomorphic rabbit standing on a dirt path, surrounded by a picturesque rural landscape.
+
+The rabbit, with its light brown fur and distinctive large
+```
+
+The sample input image is:
+
+![rabbit](https://hf-mirror.com/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg)
diff --git a/python/llm/example/GPU/PyTorch-Models/Model/llama3.2-vision/generate.py b/python/llm/example/GPU/PyTorch-Models/Model/llama3.2-vision/generate.py
new file mode 100644
index 00000000000..b424461f4ef
--- /dev/null
+++ b/python/llm/example/GPU/PyTorch-Models/Model/llama3.2-vision/generate.py
@@ -0,0 +1,77 @@
+#
+# Copyright 2016 The BigDL Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import argparse
+import os
+
+import requests
+import time
+import torch
+from PIL import Image
+from transformers import MllamaForConditionalGeneration, AutoProcessor
+
+from ipex_llm import optimize_model
+
+if __name__ == '__main__':
+ parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for Llama3.2-Vision model')
+ parser.add_argument('--repo-id-or-model-path', type=str, default="meta-llama/Llama-3.2-11B-Vision-Instruct",
+ help='The huggingface repo id for the Llama3.2-Vision model to be downloaded'
+ ', or the path to the huggingface checkpoint folder')
+ parser.add_argument('--image-url-or-path', type=str,
+ default='https://hf-mirror.com/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg',
+ help='The URL or path to the image to infer')
+ parser.add_argument('--prompt', type=str, default="Describe image in detail",
+ help='Prompt to infer')
+ parser.add_argument('--n-predict', type=int, default=32,
+ help='Max tokens to predict')
+
+ args = parser.parse_args()
+ model_path = args.repo_id_or_model_path
+ image_path = args.image_url_or_path
+ prompt = args.prompt
+
+    model = MllamaForConditionalGeneration.from_pretrained(model_path)
+    # Apply IPEX-LLM low-bit optimization; keep the multi-modal projector
+    # unconverted to preserve accuracy
+    model = optimize_model(model, modules_to_not_convert=["multi_modal_projector"])
+    # Run in half precision on the Intel GPU
+    model = model.half().eval()
+    model = model.to('xpu')
+
+ processor = AutoProcessor.from_pretrained(model_path)
+
+    # Build a chat-style message with an image placeholder and the text prompt
+    messages = [
+ {
+ "role": "user",
+ "content": [
+ {"type": "image"},
+ {"type": "text", "text": prompt}
+ ]
+ }
+ ]
+    # Apply the chat template, appending the assistant generation prompt
+    text = processor.apply_chat_template(messages, add_generation_prompt=True)
+
+    # Load the image from a local path if it exists; otherwise treat it as a URL
+    if os.path.exists(image_path):
+        image = Image.open(image_path)
+    else:
+        image = Image.open(requests.get(image_path, stream=True).raw)
+
+ inputs = processor(text=text, images=image, return_tensors="pt").to(model.device)
+
+    with torch.inference_mode():
+        # The first generate() call on XPU includes one-time warm-up and
+        # compilation overhead, so run several iterations and report the last
+        for i in range(3):
+            st = time.time()
+            output = model.generate(**inputs, do_sample=False, max_new_tokens=args.n_predict)
+            torch.xpu.synchronize()
+            et = time.time()
+        print(f'Inference time: {et - st} s')
+        print('-' * 20, 'Prompt', '-' * 20)
+        print(prompt)
+        print('-' * 20, 'Output', '-' * 20)
+        # Decode only the newly generated tokens, skipping the prompt
+        print(processor.decode(output[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True))