- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the Hugging Face repo id for the InternLM2 model (e.g. `internlm/internlm2-chat-7b`) to be downloaded, or the path to the Hugging Face checkpoint folder. It defaults to `'internlm/internlm2-chat-7b'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'AI是什么?'` ("What is AI?").
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.
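The flags above would typically be declared with `argparse`; the following is a minimal sketch of how `generate.py` might define them, with defaults mirroring the list above (the actual script's parsing code may differ):

```python
import argparse

# Hypothetical sketch of the argument parsing in generate.py;
# defaults mirror the documented flags above.
parser = argparse.ArgumentParser(
    description="Predict tokens for an InternLM2 model"
)
parser.add_argument("--repo-id-or-model-path", type=str,
                    default="internlm/internlm2-chat-7b",
                    help="Hugging Face repo id or local checkpoint folder")
parser.add_argument("--prompt", type=str, default="AI是什么?",
                    help="Prompt to be inferred")
parser.add_argument("--n-predict", type=int, default=32,
                    help="Max number of tokens to predict")

# Parsing an empty argv yields the documented defaults.
args = parser.parse_args([])
print(args.repo_id_or_model_path, args.n_predict)
```

Note that `argparse` converts the dashes in `--repo-id-or-model-path` to underscores, so the value is read as `args.repo_id_or_model_path`.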
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```cmd
python ./generate.py --prompt 'What is AI?' --repo-id-or-model-path REPO_ID_OR_MODEL_PATH
```
More information about the arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.