python/llm/example/GPU/HuggingFace/LLM/gemma2/README.md (+3 -3)
@@ -4,7 +4,7 @@ In this directory, you will find examples on how you could apply IPEX-LLM INT4 o
 ## Requirements
 To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine, please refer to [here](../../../README.md#requirements) for more information.

-**Important: According to Gemma2's requirement, please make sure you have installed `transformers==4.43.1` and `trl` to run the example.**
+**Important: According to Gemma2's requirement, please make sure you have installed `transformers==4.43.1` and `trl<0.12.0` to run the example.**

 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Gemma2 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations on Intel GPUs.
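
For context on what the touched README describes, here is a minimal, hypothetical sketch of the kind of script `generate.py` illustrates: loading a Gemma2 model with IPEX-LLM INT4 optimizations on an Intel GPU and predicting the next N tokens through the standard Hugging Face `generate()` API. The model id, prompt, and token count below are placeholders, and the snippet assumes the pins from the changed line (`transformers==4.43.1`, `trl<0.12.0`) plus `ipex-llm` are already installed; the actual example in the repository may differ in structure and arguments.

```python
# Hypothetical sketch, not the repository's generate.py.
# Assumes an Intel GPU machine with ipex-llm, transformers==4.43.1 and trl<0.12.0 installed.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

MODEL_PATH = "google/gemma-2-9b-it"   # placeholder model id or local path
PROMPT = "What is AI?"                # placeholder prompt
N_PREDICT = 32                        # number of new tokens to predict

# Load the model with IPEX-LLM INT4 (4-bit) optimizations and move it to the Intel GPU (XPU).
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to("xpu")

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
input_ids = tokenizer(PROMPT, return_tensors="pt").input_ids.to("xpu")

# Plain Hugging Face generate() call; the INT4-optimized weights are used transparently.
with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=N_PREDICT)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Installing the prerequisites would typically look like `pip install transformers==4.43.1 "trl<0.12.0"`, matching the version pins called out in the updated requirement line.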