
Commit 336dfc0

songhappy and rnwang04 authored
fix 1482 (#11661)
Co-authored-by: rnwang04 <[email protected]>
1 parent ba01b85 commit 336dfc0

File tree

4 files changed (+53 -6 lines)
  • python/llm/example

python/llm/example/CPU/HF-Transformers-AutoModels/Model/internlm2/README.md

+12 -2

@@ -18,6 +18,8 @@ conda activate llm

 # install the latest ipex-llm nightly build with 'all' option
 pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
+pip install transformers==4.36.2
+pip install huggingface_hub
 ```

 On Windows:
@@ -27,9 +29,17 @@ conda create -n llm python=3.11
 conda activate llm

 pip install --pre --upgrade ipex-llm[all]
+pip install transformers==4.36.2
+pip install huggingface_hub
 ```

 ### 2. Run
+Set `repo_id` and a local `MODEL_PATH`, then run the following Python code to download the required revision of the model from Hugging Face.
+```python
+from huggingface_hub import snapshot_download
+snapshot_download(repo_id=repo_id, local_dir=MODEL_PATH, local_dir_use_symlinks=False, revision="v1.1.0")
+```
+Then run the example with the downloaded model:
 ```
 python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
 ```
@@ -46,7 +56,7 @@ Arguments info:
 #### 2.1 Client
 On client Windows machine, it is recommended to run directly with full utilization of all cores:
 ```cmd
-python ./generate.py
+python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH
 ```

 #### 2.2 Server
@@ -59,7 +69,7 @@ source ipex-llm-init

 # e.g. for a server with 48 cores per socket
 export OMP_NUM_THREADS=48
-numactl -C 0-47 -m 0 python ./generate.py
+numactl -C 0-47 -m 0 python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH
 ```

 #### 2.3 Sample Output
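The `snapshot_download` call added above leaves `repo_id` and `MODEL_PATH` for the reader to define. A minimal, self-contained version of that download step might look like the following sketch; the repo id and local directory are illustrative placeholders rather than values taken from the README.

```python
from huggingface_hub import snapshot_download

# Illustrative placeholders: the README's default repo id and an arbitrary local directory.
repo_id = "internlm/internlm2-chat-7b"
MODEL_PATH = "./internlm2-chat-7b"

# Download the pinned v1.1.0 revision of the model, as the updated README instructs.
snapshot_download(repo_id=repo_id,
                  local_dir=MODEL_PATH,
                  local_dir_use_symlinks=False,
                  revision="v1.1.0")
```

The downloaded folder can then be passed to the example, e.g. `python ./generate.py --repo-id-or-model-path ./internlm2-chat-7b`.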

python/llm/example/CPU/PyTorch-Models/Model/internlm2/README.md

+19 -2

@@ -19,6 +19,8 @@ conda activate llm

 # install the latest ipex-llm nightly build with 'all' option
 pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
+pip install transformers==4.36.2
+pip install huggingface_hub
 ```

 On Windows:
@@ -28,15 +30,30 @@ conda create -n llm python=3.11
 conda activate llm

 pip install --pre --upgrade ipex-llm[all]
+pip install transformers==4.36.2
+pip install huggingface_hub
 ```

 ### 2. Run
 After setting up the Python environment, you could run the example by following steps.
+Set `repo_id` and a local `MODEL_PATH`, then run the following Python code to download the required revision of the model from Hugging Face.
+```python
+from huggingface_hub import snapshot_download
+snapshot_download(repo_id=repo_id, local_dir=MODEL_PATH, local_dir_use_symlinks=False, revision="v1.1.0")
+```
+Then run the example with the downloaded model:
+```
+python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
+```
+Arguments info:
+- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the InternLM2 model (e.g. `internlm/internlm2-chat-7b`) to be downloaded, or the path to the huggingface checkpoint folder. The default value is `'internlm/internlm2-chat-7b'`.
+- `--prompt PROMPT`: argument defining the prompt to be inferred (with integrated prompt format for chat). The default value is `'AI是什么?'`.
+- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. The default value is `32`.

 #### 2.1 Client
 On client Windows machines, it is recommended to run directly with full utilization of all cores:
 ```cmd
-python ./generate.py --prompt 'What is AI?'
+python ./generate.py --prompt 'What is AI?' --repo-id-or-model-path REPO_ID_OR_MODEL_PATH
 ```
 More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.

@@ -50,7 +67,7 @@ source ipex-llm-init

 # e.g. for a server with 48 cores per socket
 export OMP_NUM_THREADS=48
-numactl -C 0-47 -m 0 python ./generate.py --prompt 'What is AI?'
+numactl -C 0-47 -m 0 python ./generate.py --prompt 'What is AI?' --repo-id-or-model-path REPO_ID_OR_MODEL_PATH
 ```
 More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.
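For orientation, the `generate.py` referenced in this example presumably follows the usual ipex-llm PyTorch-Models pattern: load the checkpoint with plain `transformers`, apply ipex-llm's low-bit optimization, then generate. A rough sketch under that assumption (the model path, dtype, and generation settings below are illustrative, not copied from the example script):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from ipex_llm import optimize_model  # ipex-llm low-bit optimization for plain PyTorch models

MODEL_PATH = "./internlm2-chat-7b"  # illustrative: the directory filled by snapshot_download above

# InternLM2 ships custom modeling code, so trust_remote_code=True is required.
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, trust_remote_code=True,
                                             torch_dtype=torch.float32, low_cpu_mem_usage=True)
model = optimize_model(model)  # convert the weights to ipex-llm's default low-bit format

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
inputs = tokenizer("AI是什么?", return_tensors="pt")
with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```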

python/llm/example/GPU/HuggingFace/LLM/internlm2/README.md

+11 -1

@@ -14,6 +14,8 @@ conda create -n llm python=3.11
 conda activate llm
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+pip install transformers==4.36.2
+pip install huggingface_hub
 ```

 #### 1.2 Installation on Windows
@@ -24,6 +26,8 @@ conda activate llm

 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+pip install transformers==4.36.2
+pip install huggingface_hub
 ```

 ### 2. Configures OneAPI environment variables for Linux
@@ -100,8 +104,14 @@ set SYCL_CACHE_PERSISTENT=1

 > [!NOTE]
 > For the first time that each model runs on Intel iGPU/Intel Arc™ A300-Series or Pro A60, it may take several minutes to compile.
-### 4. Running examples

+### 4. Running examples
+Set `repo_id` and a local `MODEL_PATH`, then run the following Python code to download the required revision of the model from Hugging Face.
+```python
+from huggingface_hub import snapshot_download
+snapshot_download(repo_id=repo_id, local_dir=MODEL_PATH, local_dir_use_symlinks=False, revision="v1.1.0")
+```
+Then run the example with the downloaded model:
 ```
 python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
 ```
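On the GPU side the flow is the same, except that the model is presumably loaded through ipex-llm's drop-in `AutoModelForCausalLM` with 4-bit weights and moved to the `xpu` device. A rough sketch under that assumption, again with an illustrative model path:

```python
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM  # ipex-llm drop-in replacement for transformers

MODEL_PATH = "./internlm2-chat-7b"  # illustrative: the directory filled by snapshot_download above

# Load with 4-bit weights, then place the model on the Intel GPU ("xpu") device.
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, load_in_4bit=True, trust_remote_code=True)
model = model.to("xpu")

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
input_ids = tokenizer.encode("AI是什么?", return_tensors="pt").to("xpu")
with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0].cpu(), skip_special_tokens=True))
```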

python/llm/example/GPU/PyTorch-Models/Model/internlm2/README.md

+11 -1

@@ -14,6 +14,8 @@ conda create -n llm python=3.11
 conda activate llm
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+pip install transformers==4.36.2
+pip install huggingface_hub
 ```

 #### 1.2 Installation on Windows
@@ -24,6 +26,8 @@ conda activate llm

 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+pip install transformers==4.36.2
+pip install huggingface_hub
 ```

 ### 2. Configures OneAPI environment variables for Linux
@@ -100,8 +104,14 @@ set SYCL_CACHE_PERSISTENT=1

 > [!NOTE]
 > For the first time that each model runs on Intel iGPU/Intel Arc™ A300-Series or Pro A60, it may take several minutes to compile.
-### 4. Running examples

+### 4. Running examples
+Set `repo_id` and a local `MODEL_PATH`, then run the following Python code to download the required revision of the model from Hugging Face.
+```python
+from huggingface_hub import snapshot_download
+snapshot_download(repo_id=repo_id, local_dir=MODEL_PATH, local_dir_use_symlinks=False, revision="v1.1.0")
+```
+Then run the example with the downloaded model:
 ```
 python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
 ```
