Commit 3df6195

Fix application quickstart (#12305)

* fix graphrag quickstart
* fix axolotl quickstart
* fix ragflow quickstart
* fix ragflow quickstart
* fix graphrag toc
* fix comments
* fix comment
* fix comments

1 parent 4892df6 commit 3df6195

File tree

4 files changed: +29 -12 lines changed

docs/mddocs/Quickstart/axolotl_quickstart.md (+2 -2)

````diff
@@ -45,10 +45,10 @@ Install [axolotl v0.4.0](https://github.com/OpenAccess-AI-Collective/axolotl/tre
 
 ```bash
 # install axolotl v0.4.0
-git clone https://github.com/OpenAccess-AI-Collective/axolotl/tree/v0.4.0
+git clone https://github.com/OpenAccess-AI-Collective/axolotl -b v0.4.0
 cd axolotl
 # replace requirements.txt
-remove requirements.txt
+rm requirements.txt
 wget -O requirements.txt https://raw.githubusercontent.com/intel-analytics/ipex-llm/main/python/llm/example/GPU/LLM-Finetuning/axolotl/requirements-xpu.txt
 pip install -e .
 pip install transformers==4.36.0
````
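The fix above replaces an invalid clone URL (a `/tree/<tag>` browser path) with git's `-b` flag, which accepts tags as well as branches. A side-effect-free sketch of that behavior, using a throwaway local repository in place of the real axolotl repo (the temp-repo setup and user identity are illustrative only):

```shell
# build a throwaway repo with a v0.4.0 tag, mirroring the fixed clone command
tmp=$(mktemp -d)
git init -q "$tmp/src"
git -C "$tmp/src" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "init"
git -C "$tmp/src" tag v0.4.0
# `-b` accepts a tag, so the clone checks out v0.4.0 (detached HEAD)
git clone -q -c advice.detachedHead=false -b v0.4.0 "$tmp/src" "$tmp/axolotl"
git -C "$tmp/axolotl" describe --tags   # prints: v0.4.0
```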

docs/mddocs/Quickstart/graphrag_quickstart.md (+2 -1)

````diff
@@ -18,7 +18,8 @@ The [GraphRAG project](https://github.com/microsoft/graphrag) is designed to lev
 
 Follow the steps in [Run Ollama with IPEX-LLM on Intel GPU Guide](./ollama_quickstart.md) to install `ipex-llm[cpp]==2.1.0` and run Ollama on Intel GPU. Ensure that `ollama serve` is running correctly and can be accessed through a local URL (e.g., `https://127.0.0.1:11434`).
 
-**Please note that for GraphRAG, we highly recommand using the stable version of ipex-llm through `pip install ipex-llm[cpp]==2.1.0`**.
+> [!NOTE]
+> Please note that for GraphRAG, we highly recommend using the stable version of ipex-llm through `pip install ipex-llm[cpp]==2.1.0`.
 
 ### 2. Prepare LLM and Embedding Model
 
````
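The guide depends on `ollama serve` being reachable at that local URL before GraphRAG is run. A hedged one-line probe (probing plain `http`, which is what a default local `ollama serve` listens on; the fallback message is mine, not from the docs):

```shell
# probe the local Ollama endpoint; print a note instead of failing
# when the server is not up
curl -sf http://127.0.0.1:11434 || echo "ollama serve is not reachable on 127.0.0.1:11434"
```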

docs/mddocs/Quickstart/ragflow_quickstart.md (+24 -8)

````diff
@@ -21,6 +21,7 @@
 - [Pull Model](./ragflow_quickstart.md#2-pull-model)
 - [Start `RAGFlow` Service](./ragflow_quickstart.md#3-start-ragflow-service)
 - [Using `RAGFlow`](./ragflow_quickstart.md#4-using-ragflow)
+- [Troubleshooting](./ragflow_quickstart.md#5-troubleshooting)
 
 ## Quickstart
 
@@ -71,23 +72,23 @@ Now we need to pull a model for RAG using Ollama. Here we use [Qwen/Qwen2-7B](ht
 You can either clone the repository or download the source zip from [github](https://github.com/infiniflow/ragflow/archive/refs/heads/main.zip):
 
 ```bash
-$ git clone https://github.com/infiniflow/ragflow.git
+git clone https://github.com/infiniflow/ragflow.git
 ```
 
 #### 3.2 Environment Settings
 
 Ensure `vm.max_map_count` is set to at least 262144. To check the current value of `vm.max_map_count`, use:
 
 ```bash
-$ sysctl vm.max_map_count
+sysctl vm.max_map_count
 ```
 
 ##### Changing `vm.max_map_count`
 
 To set the value temporarily, use:
 
 ```bash
-$ sudo sysctl -w vm.max_map_count=262144
+sudo sysctl -w vm.max_map_count=262144
 ```
 
 To make the change permanent and ensure it persists after a reboot, add or update the following line in `/etc/sysctl.conf`:
````
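The permanent entry the hunk refers to is, judging from the temporary `sysctl -w` command above, a single `vm.max_map_count=262144` line. The sketch below writes it to a scratch file standing in for `/etc/sysctl.conf`, so the example needs no root access:

```shell
# scratch file stands in for /etc/sysctl.conf (no root needed here)
conf=$(mktemp)
echo 'vm.max_map_count=262144' >> "$conf"
grep '^vm.max_map_count' "$conf"   # prints: vm.max_map_count=262144
# on a real system: append the same line to /etc/sysctl.conf,
# then apply it with `sudo sysctl -p`
```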
````diff
@@ -104,10 +105,10 @@ Build the pre-built Docker images and start up the server:
 > Running the following commands automatically downloads the *dev* version RAGFlow Docker image. To download and run a specified Docker version, update `RAGFLOW_VERSION` in **docker/.env** to the intended version, for example `RAGFLOW_VERSION=v0.7.0`, before running the following commands.
 
 ```bash
-$ export no_proxy=localhost,127.0.0.1
-$ cd ragflow/docker
-$ chmod +x ./entrypoint.sh
-$ docker compose up -d
+export no_proxy=localhost,127.0.0.1
+cd ragflow/docker
+chmod +x ./entrypoint.sh
+docker compose up -d
 ```
 
 > [!NOTE]
@@ -116,7 +117,7 @@ $ docker compose up -d
 Check the server status after having the server up and running:
 
 ```bash
-$ docker logs -f ragflow-server
+docker logs -f ragflow-server
 ```
 
 Upon successful deployment, you will see logs in the terminal similar to the following:
````
````diff
@@ -237,3 +238,18 @@ Input your questions into the **Message Resume Assistant** textbox at the bottom
 #### Exit
 
 To shut down the RAGFlow server, use **Ctrl+C** in the terminal where the RAGFlow server is running, then close your browser tab.
+
+### 5. Troubleshooting
+
+#### Stuck when parsing files: `Node <Urllib3HttpNode(http://es01:9200)> has failed for xx times in a row, putting on 30 second timeout`
+
+This happens when there is not enough space left on the disk and the Docker container stops working. Please free up disk space and make sure disk usage stays below 90%.
+
+#### `Max retries exceeded with url: /encodings/cl100k_base.tiktoken` while starting the RAGFlow service through Docker
+
+This may be caused by a network problem. To resolve it, you could try to:
+
+1. Attach to the Docker container by `docker exec -it ragflow-server /bin/bash`.
+2. Set environment variables like `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY` at the beginning of `/ragflow/entrypoint.sh`.
+3. Stop the service by `docker compose stop`.
+4. Restart the service by `docker compose start`.
````
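Step 2 of the troubleshooting list edits `/ragflow/entrypoint.sh` in place. A side-effect-free sketch of the kind of lines one might prepend (the proxy address is a placeholder, and a scratch script stands in for the real entrypoint):

```shell
# scratch file stands in for /ragflow/entrypoint.sh
script=$(mktemp)
{
  echo '#!/bin/bash'
  # placeholder proxy values; replace with your actual proxy settings
  echo 'export HTTP_PROXY=http://proxy.example.com:8080'
  echo 'export HTTPS_PROXY=http://proxy.example.com:8080'
  echo 'export NO_PROXY=localhost,127.0.0.1'
  echo 'echo "rest of entrypoint continues here"'  # stand-in for the script body
} > "$script"
head -n 4 "$script"
```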

python/llm/example/GPU/LLM-Finetuning/axolotl/requirements-xpu.txt (+1 -1)

````diff
@@ -40,4 +40,4 @@ s3fs
 gcsfs
 # adlfs
 
-trl>=0.7.9
+trl>=0.7.9, <=0.9.6
````
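The added upper bound keeps the resolver inside the `[0.7.9, 0.9.6]` window instead of picking the latest `trl` release. `sort -V` (GNU version sort) makes it easy to see where a candidate falls relative to the new cap; the version numbers below are illustrative:

```shell
# version-sort candidate trl releases against the pinned bounds;
# anything sorting after 0.9.6 is now excluded by the requirement
printf '%s\n' 0.7.9 0.8.6 0.9.6 0.11.0 | sort -V
```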
