
Commit 2a61fc0

docs: fix typo from block_to_swap to blocks_to_swap in README

1 parent 2a188f0 commit 2a61fc0

File tree: 1 file changed (+4, -4 lines)


README.md (+4, -4)
````diff
@@ -68,11 +68,11 @@ When training LoRA for Text Encoder (without `--network_train_unet_only`), more
 
 __Options for GPUs with less VRAM:__
 
-By specifying `--block_to_swap`, you can save VRAM by swapping some blocks between CPU and GPU. See [FLUX.1 fine-tuning](#flux1-fine-tuning) for details.
+By specifying `--blocks_to_swap`, you can save VRAM by swapping some blocks between CPU and GPU. See [FLUX.1 fine-tuning](#flux1-fine-tuning) for details.
 
-Specify a number like `--block_to_swap 10`. A larger number will swap more blocks, saving more VRAM, but training will be slower. In FLUX.1, you can swap up to 35 blocks.
+Specify a number like `--blocks_to_swap 10`. A larger number will swap more blocks, saving more VRAM, but training will be slower. In FLUX.1, you can swap up to 35 blocks.
 
-`--cpu_offload_checkpointing` offloads gradient checkpointing to CPU. This reduces up to 1GB of VRAM usage but slows down the training by about 15%. Cannot be used with `--block_to_swap`.
+`--cpu_offload_checkpointing` offloads gradient checkpointing to CPU. This reduces up to 1GB of VRAM usage but slows down the training by about 15%. Cannot be used with `--blocks_to_swap`.
 
 Adafactor optimizer may reduce the VRAM usage than 8bit AdamW. Please use settings like below:
 
@@ -82,7 +82,7 @@ Adafactor optimizer may reduce the VRAM usage than 8bit AdamW. Please use settin
 
 The training can be done with 16GB VRAM GPUs with the batch size of 1. Please change your dataset configuration.
 
-The training can be done with 12GB VRAM GPUs with `--block_to_swap 16` with 8bit AdamW. Please use settings like below:
+The training can be done with 12GB VRAM GPUs with `--blocks_to_swap 16` with 8bit AdamW. Please use settings like below:
 
 ```
 --blocks_to_swap 16
````
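For context, the corrected flag is passed to the training launch command. The sketch below is a hypothetical example, not part of this commit: the script name, model path, and dataset config are placeholder assumptions; only `--blocks_to_swap` itself comes from the diff.

```shell
# Hypothetical launch sketch (paths and script name are assumptions).
# --blocks_to_swap swaps transformer blocks between CPU and GPU; larger
# values save more VRAM but slow training (up to 35 blocks for FLUX.1).
# Per the README, it cannot be combined with --cpu_offload_checkpointing.
accelerate launch flux_train_network.py \
  --pretrained_model_name_or_path flux1-dev.safetensors \
  --dataset_config dataset.toml \
  --blocks_to_swap 16
```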

0 commit comments
