
Commit 164f47a

MiniCPM-V-2 & MiniCPM-Llama3-V-2_5 example updates (#11988)
* minicpm example updates
* --stream
1 parent 2e54f44 commit 164f47a

5 files changed: +143 -60 lines changed

python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-Llama3-V-2_5/README.md (+23 -9)
@@ -5,7 +5,7 @@ In this directory, you will find examples on how you could apply IPEX-LLM INT4 o
 To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine, please refer to [here](../../../README.md#requirements) for more information.
 
 ## Example: Predict Tokens using `chat()` API
-In the example [generate.py](./generate.py), we show a basic use case for a MiniCPM-Llama3-V-2_5 model to predict the next N tokens using `chat()` API, with IPEX-LLM INT4 optimizations on Intel GPUs.
+In the example [chat.py](./chat.py), we show a basic use case for a MiniCPM-Llama3-V-2_5 model to predict the next N tokens using `chat()` API, with IPEX-LLM INT4 optimizations on Intel GPUs.
 ### 1. Install
 #### 1.1 Installation on Linux
 We suggest using conda to manage environment:
@@ -106,28 +106,42 @@ set SYCL_CACHE_PERSISTENT=1
 > For the first time that each model runs on Intel iGPU/Intel Arc™ A300-Series or Pro A60, it may take several minutes to compile.
 ### 4. Running examples
 
-```
-python ./generate.py --prompt 'What is in the image?'
-```
+- chat without streaming mode:
+```
+python ./chat.py --prompt 'What is in the image?'
+```
+- chat in streaming mode:
+```
+python ./chat.py --prompt 'What is in the image?' --stream
+```
 
 Arguments info:
 - `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the MiniCPM-Llama3-V-2_5 (e.g. `openbmb/MiniCPM-Llama3-V-2_5`) to be downloaded, or the path to the huggingface checkpoint folder. It is default to be `'openbmb/MiniCPM-Llama3-V-2_5'`.
 - `--image-url-or-path IMAGE_URL_OR_PATH`: argument defining the image to be infered. It is default to be `'http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg'`.
 - `--prompt PROMPT`: argument defining the prompt to be infered (with integrated prompt format for chat). It is default to be `'What is in the image?'`.
-- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It is default to be `32`.
+- `--stream`: flag to chat in streaming mode
 
 #### Sample Output
 
 #### [openbmb/MiniCPM-Llama3-V-2_5](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5)
 
 ```log
 Inference time: xxxx s
--------------------- Input --------------------
+-------------------- Input Image --------------------
 http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg
--------------------- Prompt --------------------
+-------------------- Input Prompt --------------------
 What is in the image?
--------------------- Output --------------------
-The image features a young child holding a white teddy bear. The teddy bear is dressed in a pink outfit. The child appears to be outdoors, with a stone wall and some red flowers in the background.
+-------------------- Chat Output --------------------
+The image features a young child holding a white teddy bear. The teddy bear is dressed in a pink dress with a ribbon on it. The child appears to be smiling and enjoying the moment.
+```
+```log
+Inference time: xxxx s
+-------------------- Input Image --------------------
+http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg
+-------------------- Input Prompt --------------------
+图片里有什么?
+-------------------- Chat Output --------------------
+图片中有一个小孩,手里拿着一个白色的玩具熊。这个孩子看起来很开心,正在微笑并与玩具互动。背景包括红色的花朵和石墙,为这个场景增添了色彩和质感。
 ```
 
 The sample input image is (which is fetched from [COCO dataset](https://cocodataset.org/#explore?id=264959)):
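The new `--stream` switch documented above is a plain boolean flag; below is a minimal sketch of the argument wiring it describes, assembled from the README defaults and the chat.py change later in this commit rather than copied verbatim from the repository.

```python
import argparse

# Defaults below are the ones listed under "Arguments info" above.
parser = argparse.ArgumentParser()
parser.add_argument('--repo-id-or-model-path', type=str, default='openbmb/MiniCPM-Llama3-V-2_5',
                    help='Hugging Face repo id or local checkpoint folder')
parser.add_argument('--image-url-or-path', type=str,
                    default='http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg',
                    help='The URL or path to the image to infer')
parser.add_argument('--prompt', type=str, default='What is in the image?',
                    help='Prompt to infer')
parser.add_argument('--stream', action='store_true',
                    help='Whether to chat in streaming mode')
args = parser.parse_args()
```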

python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-Llama3-V-2_5/generate.py → python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-Llama3-V-2_5/chat.py (+50 -25)
@@ -14,10 +14,12 @@
 # limitations under the License.
 #
 
+
 import os
 import time
 import argparse
 import requests
+import torch
 from PIL import Image
 from ipex_llm.transformers import AutoModel
 from transformers import AutoTokenizer
@@ -33,8 +35,8 @@
                     help='The URL or path to the image to infer')
 parser.add_argument('--prompt', type=str, default="What is in the image?",
                     help='Prompt to infer')
-parser.add_argument('--n-predict', type=int, default=32,
-                    help='Max tokens to predict')
+parser.add_argument('--stream', action='store_true',
+                    help='Whether to chat in streaming mode')
 
 args = parser.parse_args()
 model_path = args.repo_id_or_model_path
@@ -45,11 +47,12 @@
 # When running LLMs on Intel iGPUs for Windows users, we recommend setting `cpu_embedding=True` in the from_pretrained function.
 # This will allow the memory-intensive embedding layer to utilize the CPU instead of iGPU.
 model = AutoModel.from_pretrained(model_path,
-                                  load_in_4bit=True,
-                                  optimize_model=False,
+                                  load_in_low_bit="sym_int4",
+                                  optimize_model=True,
                                   trust_remote_code=True,
-                                  use_cache=True)
-model = model.half().to(device='xpu')
+                                  use_cache=True,
+                                  modules_to_not_convert=["vpm", "resampler"])
+model = model.half().to('xpu')
 tokenizer = AutoTokenizer.from_pretrained(model_path,
                                           trust_remote_code=True)
 model.eval()
@@ -61,23 +64,45 @@
 image = Image.open(requests.get(image_path, stream=True).raw).convert('RGB')
 
 # Generate predicted tokens
-# here the prompt tuning refers to https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5/blob/main/README.md
-msgs = [{'role': 'user', 'content': args.prompt}]
-st = time.time()
-res = model.chat(
-    image=image,
-    msgs=msgs,
-    context=None,
-    tokenizer=tokenizer,
-    sampling=False,
-    temperature=0.7
+# here the prompt tuning refers to https://huggingface.co/openbmb/MiniCPM-V-2_6/blob/main/README.md
+msgs = [{'role': 'user', 'content': [image, args.prompt]}]
+
+# ipex_llm model needs a warmup, then inference time can be accurate
+model.chat(
+    image=None,
+    msgs=msgs,
+    tokenizer=tokenizer,
 )
-end = time.time()
-print(f'Inference time: {end-st} s')
-print('-'*20, 'Input', '-'*20)
-print(image_path)
-print('-'*20, 'Prompt', '-'*20)
-print(args.prompt)
-output_str = res
-print('-'*20, 'Output', '-'*20)
-print(output_str)
+
+if args.stream:
+    res = model.chat(
+        image=None,
+        msgs=msgs,
+        tokenizer=tokenizer,
+        stream=True
+    )
+
+    print('-'*20, 'Input Image', '-'*20)
+    print(image_path)
+    print('-'*20, 'Input Prompt', '-'*20)
+    print(args.prompt)
+    print('-'*20, 'Stream Chat Output', '-'*20)
+    for new_text in res:
+        print(new_text, flush=True, end='')
+else:
+    st = time.time()
+    res = model.chat(
+        image=None,
+        msgs=msgs,
+        tokenizer=tokenizer,
+    )
+    torch.xpu.synchronize()
+    end = time.time()
+
+    print(f'Inference time: {end-st} s')
+    print('-'*20, 'Input Image', '-'*20)
+    print(image_path)
+    print('-'*20, 'Input Prompt', '-'*20)
+    print(args.prompt)
+    print('-'*20, 'Chat Output', '-'*20)
+    print(res)
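Read end to end, the updated chat.py boils down to the flow below. This is a condensed sketch assembled from the added lines in the hunks above (the model path and image URL are the example defaults), not a verbatim copy of the file.

```python
import time

import requests
import torch
from PIL import Image
from ipex_llm.transformers import AutoModel
from transformers import AutoTokenizer

model_path = 'openbmb/MiniCPM-Llama3-V-2_5'
image_path = 'http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg'

# Symmetric INT4 for the language model; the vision tower ("vpm") and the
# resampler are excluded from low-bit conversion.
model = AutoModel.from_pretrained(model_path,
                                  load_in_low_bit="sym_int4",
                                  optimize_model=True,
                                  trust_remote_code=True,
                                  use_cache=True,
                                  modules_to_not_convert=["vpm", "resampler"])
model = model.half().to('xpu')
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model.eval()

image = Image.open(requests.get(image_path, stream=True).raw).convert('RGB')
msgs = [{'role': 'user', 'content': [image, 'What is in the image?']}]

# Warm-up call, so the timed run below is not dominated by first-run compilation.
model.chat(image=None, msgs=msgs, tokenizer=tokenizer)

# Non-streaming: time a full chat() call, synchronizing the XPU before stopping the clock.
st = time.time()
res = model.chat(image=None, msgs=msgs, tokenizer=tokenizer)
torch.xpu.synchronize()
end = time.time()
print(f'Inference time: {end - st} s')
print(res)

# Streaming: with stream=True, chat() yields text chunks as they are generated.
for new_text in model.chat(image=None, msgs=msgs, tokenizer=tokenizer, stream=True):
    print(new_text, flush=True, end='')
```

Note that the image is now passed inside `msgs` (the MiniCPM-V-2_6-style message format), which is why the diff changes `image=image` to `image=None` in every `chat()` call.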

python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-V-2/README.md (+22 -9)
@@ -5,7 +5,7 @@ In this directory, you will find examples on how you could apply IPEX-LLM INT4 o
 To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine, please refer to [here](../../../README.md#requirements) for more information.
 
 ## Example: Predict Tokens using `chat()` API
-In the example [generate.py](./generate.py), we show a basic use case for a MiniCPM-V-2 model to predict the next N tokens using `chat()` API, with IPEX-LLM INT4 optimizations on Intel GPUs.
+In the example [chat.py](./chat.py), we show a basic use case for a MiniCPM-V-2 model to predict the next N tokens using `chat()` API, with IPEX-LLM INT4 optimizations on Intel GPUs.
 ### 1. Install
 #### 1.1 Installation on Linux
 We suggest using conda to manage environment:
@@ -106,28 +106,41 @@ set SYCL_CACHE_PERSISTENT=1
 > For the first time that each model runs on Intel iGPU/Intel Arc™ A300-Series or Pro A60, it may take several minutes to compile.
 ### 4. Running examples
 
-```
-python ./generate.py --prompt 'What is in the image?'
-```
+- chat without streaming mode:
+```
+python ./chat.py --prompt 'What is in the image?'
+```
+- chat in streaming mode:
+```
+python ./chat.py --prompt 'What is in the image?' --stream
+```
 
 Arguments info:
 - `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the MiniCPM-V-2 (e.g. `openbmb/MiniCPM-V-2`) to be downloaded, or the path to the huggingface checkpoint folder. It is default to be `'openbmb/MiniCPM-V-2'`.
 - `--image-url-or-path IMAGE_URL_OR_PATH`: argument defining the image to be infered. It is default to be `'http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg'`.
 - `--prompt PROMPT`: argument defining the prompt to be infered (with integrated prompt format for chat). It is default to be `'What is in the image?'`.
-- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It is default to be `32`.
+- `--stream`: flag to chat in streaming mode
 
 #### Sample Output
 
 #### [openbmb/MiniCPM-V-2](https://huggingface.co/openbmb/MiniCPM-V-2)
 
 ```log
 Inference time: xxxx s
--------------------- Input --------------------
+-------------------- Input Image --------------------
 http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg
--------------------- Prompt --------------------
+-------------------- Input Prompt --------------------
 What is in the image?
--------------------- Output --------------------
-In the image, there is a young child holding a teddy bear. The teddy bear appears to be dressed in a pink tutu. The child is also wearing a red and white striped dress. The background of the image includes a stone wall and some red flowers.
+-------------------- Chat Output --------------------
+In the image, there is a young child holding a teddy bear. The teddy bear is dressed in a pink tutu. The child is also wearing a red and white striped dress. The background of the image features a stone wall and some red flowers.
+```
+```log
+-------------------- Input Image --------------------
+http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg
+-------------------- Input Prompt --------------------
+图片里有什么?
+-------------------- Chat Output --------------------
+图中是一个小女孩,她手里拿着一只粉白相间的泰迪熊。
 ```
 
 The sample input image is (which is fetched from [COCO dataset](https://cocodataset.org/#explore?id=264959)):

python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-V-2/generate.py → python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-V-2/chat.py (+46 -15)
@@ -15,6 +15,7 @@
 #
 
 
+
 from typing import List, Tuple, Optional, Union
 import math
 import timm
@@ -110,6 +111,7 @@ def _pos_embed(self, x: torch.Tensor) -> torch.Tensor:
 import time
 import argparse
 import requests
+import torch
 from PIL import Image
 from ipex_llm.transformers import AutoModel
 from transformers import AutoTokenizer
@@ -125,8 +127,8 @@ def _pos_embed(self, x: torch.Tensor) -> torch.Tensor:
                     help='The URL or path to the image to infer')
 parser.add_argument('--prompt', type=str, default="What is in the image?",
                     help='Prompt to infer')
-parser.add_argument('--n-predict', type=int, default=32,
-                    help='Max tokens to predict')
+parser.add_argument('--stream', action='store_true',
+                    help='Whether to chat in streaming mode')
 
 args = parser.parse_args()
 model_path = args.repo_id_or_model_path
@@ -140,9 +142,9 @@ def _pos_embed(self, x: torch.Tensor) -> torch.Tensor:
                                   load_in_low_bit="asym_int4",
                                   optimize_model=True,
                                   trust_remote_code=True,
-                                  modules_to_not_convert=["vpm", "resampler", "lm_head"],
-                                  use_cache=True)
-model = model.half().to(device='xpu')
+                                  use_cache=True,
+                                  modules_to_not_convert=["vpm", "resampler"])
+model = model.half().to('xpu')
 tokenizer = AutoTokenizer.from_pretrained(model_path,
                                           trust_remote_code=True)
 model.eval()
@@ -156,7 +158,8 @@ def _pos_embed(self, x: torch.Tensor) -> torch.Tensor:
 # Generate predicted tokens
 # here the prompt tuning refers to https://huggingface.co/openbmb/MiniCPM-V-2/blob/main/README.md
 msgs = [{'role': 'user', 'content': args.prompt}]
-st = time.time()
+
+# ipex_llm model needs a warmup, then inference time can be accurate
 res, context, _ = model.chat(
     image=image,
     msgs=msgs,
@@ -165,12 +168,40 @@ def _pos_embed(self, x: torch.Tensor) -> torch.Tensor:
     sampling=False,
     temperature=0.7
 )
-end = time.time()
-print(f'Inference time: {end-st} s')
-print('-'*20, 'Input', '-'*20)
-print(image_path)
-print('-'*20, 'Prompt', '-'*20)
-print(args.prompt)
-output_str = res
-print('-'*20, 'Output', '-'*20)
-print(output_str)
+if args.stream:
+    res, context, _ = model.chat(
+        image=image,
+        msgs=msgs,
+        context=None,
+        tokenizer=tokenizer,
+        sampling=False,
+        temperature=0.7
+    )
+
+    print('-'*20, 'Input Image', '-'*20)
+    print(image_path)
+    print('-'*20, 'Input Prompt', '-'*20)
+    print(args.prompt)
+    print('-'*20, 'Stream Chat Output', '-'*20)
+    for new_text in res:
+        print(new_text, flush=True, end='')
+else:
+    st = time.time()
+    res, context, _ = model.chat(
+        image=image,
+        msgs=msgs,
+        context=None,
+        tokenizer=tokenizer,
+        sampling=False,
+        temperature=0.7
+    )
+    torch.xpu.synchronize()
+    end = time.time()
+
+    print(f'Inference time: {end-st} s')
+    print('-'*20, 'Input Image', '-'*20)
+    print(image_path)
+    print('-'*20, 'Input Prompt', '-'*20)
+    print(args.prompt)
+    print('-'*20, 'Chat Output', '-'*20)
+    print(res)
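MiniCPM-V-2 keeps the older tuple-returning `chat()` signature and is loaded with asymmetric INT4. Below is a condensed sketch of the updated flow, assembled from the hunks above; the timm `_pos_embed` patch that the hunk headers reference is left out, and only the non-streaming path is shown.

```python
import time

import requests
import torch
from PIL import Image
from ipex_llm.transformers import AutoModel
from transformers import AutoTokenizer

model_path = 'openbmb/MiniCPM-V-2'
image_path = 'http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg'

# Asymmetric INT4 for the language model; "vpm" and "resampler" stay unconverted
# (and, unlike before this commit, "lm_head" is now converted as well).
model = AutoModel.from_pretrained(model_path,
                                  load_in_low_bit="asym_int4",
                                  optimize_model=True,
                                  trust_remote_code=True,
                                  use_cache=True,
                                  modules_to_not_convert=["vpm", "resampler"])
model = model.half().to('xpu')
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model.eval()

image = Image.open(requests.get(image_path, stream=True).raw).convert('RGB')
msgs = [{'role': 'user', 'content': 'What is in the image?'}]

# Warm-up call; chat() here returns (response, context, _) rather than a plain string.
res, context, _ = model.chat(image=image, msgs=msgs, context=None,
                             tokenizer=tokenizer, sampling=False, temperature=0.7)

# Timed run, synchronizing the XPU before stopping the clock.
st = time.time()
res, context, _ = model.chat(image=image, msgs=msgs, context=None,
                             tokenizer=tokenizer, sampling=False, temperature=0.7)
torch.xpu.synchronize()
end = time.time()
print(f'Inference time: {end - st} s')
print(res)
```

In the `--stream` branch of the updated script, the same `chat()` call is made and the returned `res` is printed piece by piece as it is iterated, as the diff above shows.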

python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-V-2_6/README.md (+2 -2)
@@ -108,11 +108,11 @@ set SYCL_CACHE_PERSISTENT=1
 
 - chat without streaming mode:
 ```
-python ./generate.py --prompt 'What is in the image?'
+python ./chat.py --prompt 'What is in the image?'
 ```
 - chat in streaming mode:
 ```
-python ./generate.py --prompt 'What is in the image?' --stream
+python ./chat.py --prompt 'What is in the image?' --stream
 ```
 
 > [!TIP]
