
Commit b60cae6

iammerrick authored and yusiwen committed
finetune : readme fix typo (ggml-org#3465)
Fix small typo
1 parent b6d62ea commit b60cae6

File tree

1 file changed: +1, -1 lines changed


examples/finetune/README.md

+1, -1
@@ -61,7 +61,7 @@ For example to apply 40% of the 'shakespeare' LORA adapter, 80% of the 'bible' L
 --lora lora-open-llama-3b-v2-q8_0-yet-another-one-LATEST.bin
 ```
 
-The scale numbers don't need to add up to one, and you can also use numbers creater than 1 to further increase the influence of an adapter. But making the values to big will sometimes result in worse output. Play around to find good values.
+The scale numbers don't need to add up to one, and you can also use numbers greater than 1 to further increase the influence of an adapter. But making the values to big will sometimes result in worse output. Play around to find good values.
 
 Gradient checkpointing reduces the memory requirements by ~50% but increases the runtime.
 If you have enough RAM, you can make finetuning a bit faster by disabling checkpointing with `--no-checkpointing`.
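For context, the README line in the hunk above is the tail of a command that applies several LoRA adapters at once, each with its own scale. A minimal sketch of that invocation, assuming the `main` example's `--lora FNAME` and `--lora-scaled FNAME SCALE` flags; the model and adapter filenames are illustrative, echoing the surrounding README:

```
# Apply 40% of the 'shakespeare' adapter, 80% of the 'bible' adapter,
# and a third adapter at the default scale of 1.0.
./bin/main -m open-llama-3b-v2-q8_0.gguf \
        --lora-scaled lora-open-llama-3b-v2-q8_0-shakespeare-LATEST.bin 0.4 \
        --lora-scaled lora-open-llama-3b-v2-q8_0-bible-LATEST.bin 0.8 \
        --lora lora-open-llama-3b-v2-q8_0-yet-another-one-LATEST.bin
```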

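Likewise for the checkpointing note at the end of the hunk: a hedged sketch of a finetune run with checkpointing disabled, assuming the finetune example's `--model-base`, `--train-data`, and `--lora-out` flags; the filenames are illustrative:

```
# Trade memory for speed: since checkpointing cuts memory use by ~50%,
# disabling it needs roughly twice the RAM but runs faster.
./bin/finetune \
        --model-base open-llama-3b-v2-q8_0.gguf \
        --train-data shakespeare.txt \
        --lora-out lora-open-llama-3b-v2-q8_0-shakespeare-ITERATION.bin \
        --no-checkpointing
```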
0 commit comments
