
Commit e235b26

ggerganov and compilade authored
py : switch to snake_case (#8305)
* py : switch to snake_case (ggml-ci)
* cont (ggml-ci)
* cont (ggml-ci)
* cont : fix link
* gguf-py : use snake_case in scripts entrypoint export
* py : rename requirements for convert_legacy_llama.py (needed for scripts/check-requirements.sh)

Co-authored-by: Francis Couture-Harpin <[email protected]>
1 parent f09b7cb, commit e235b26

32 files changed: +69 -104 lines

Diff for: README.md

+4 -4

@@ -26,7 +26,7 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
 
 ### Hot topics
 
-- **`convert.py` has been deprecated and moved to `examples/convert-legacy-llama.py`, please use `convert-hf-to-gguf.py`** https://github.com/ggerganov/llama.cpp/pull/7430
+- **`convert.py` has been deprecated and moved to `examples/convert_legacy_llama.py`, please use `convert_hf_to_gguf.py`** https://github.com/ggerganov/llama.cpp/pull/7430
 - Initial Flash-Attention support: https://github.com/ggerganov/llama.cpp/pull/5021
 - BPE pre-tokenization support has been added: https://github.com/ggerganov/llama.cpp/pull/6920
 - MoE memory layout has been updated - reconvert models for `mmap` support and regenerate `imatrix` https://github.com/ggerganov/llama.cpp/pull/6387
@@ -636,8 +636,8 @@ Building the program with BLAS support may lead to some performance improvements
 
 To obtain the official LLaMA 2 weights please see the <a href="#obtaining-and-using-the-facebook-llama-2-model">Obtaining and using the Facebook LLaMA 2 model</a> section. There is also a large selection of pre-quantized `gguf` models available on Hugging Face.
 
-Note: `convert.py` has been moved to `examples/convert-legacy-llama.py` and shouldn't be used for anything other than `Llama/Llama2/Mistral` models and their derivatives.
-It does not support LLaMA 3, you can use `convert-hf-to-gguf.py` with LLaMA 3 downloaded from Hugging Face.
+Note: `convert.py` has been moved to `examples/convert_legacy_llama.py` and shouldn't be used for anything other than `Llama/Llama2/Mistral` models and their derivatives.
+It does not support LLaMA 3, you can use `convert_hf_to_gguf.py` with LLaMA 3 downloaded from Hugging Face.
 
 ```bash
 # obtain the official LLaMA model weights and place them in ./models
@@ -654,7 +654,7 @@ ls ./models
 python3 -m pip install -r requirements.txt
 
 # convert the model to ggml FP16 format
-python3 convert-hf-to-gguf.py models/mymodel/
+python3 convert_hf_to_gguf.py models/mymodel/
 
 # quantize the model to 4-bits (using Q4_K_M method)
 ./llama-quantize ./models/mymodel/ggml-model-f16.gguf ./models/mymodel/ggml-model-Q4_K_M.gguf Q4_K_M

Diff for: ci/run.sh

+3 -3

@@ -287,7 +287,7 @@ function gg_run_open_llama_7b_v2 {
     (time cmake -DCMAKE_BUILD_TYPE=Release ${CMAKE_EXTRA} -DGGML_CUDA=1 .. ) 2>&1 | tee -a $OUT/${ci}-cmake.log
     (time make -j ) 2>&1 | tee -a $OUT/${ci}-make.log
 
-    python3 ../examples/convert-legacy-llama.py ${path_models} --outfile ${path_models}/ggml-model-f16.gguf
+    python3 ../examples/convert_legacy_llama.py ${path_models} --outfile ${path_models}/ggml-model-f16.gguf
 
     model_f16="${path_models}/ggml-model-f16.gguf"
     model_q8_0="${path_models}/ggml-model-q8_0.gguf"
@@ -421,7 +421,7 @@ function gg_run_pythia_1_4b {
     (time cmake -DCMAKE_BUILD_TYPE=Release ${CMAKE_EXTRA} .. ) 2>&1 | tee -a $OUT/${ci}-cmake.log
     (time make -j ) 2>&1 | tee -a $OUT/${ci}-make.log
 
-    python3 ../convert-hf-to-gguf.py ${path_models} --outfile ${path_models}/ggml-model-f16.gguf
+    python3 ../convert_hf_to_gguf.py ${path_models} --outfile ${path_models}/ggml-model-f16.gguf
 
     model_f16="${path_models}/ggml-model-f16.gguf"
     model_q8_0="${path_models}/ggml-model-q8_0.gguf"
@@ -553,7 +553,7 @@ function gg_run_pythia_2_8b {
     (time cmake -DCMAKE_BUILD_TYPE=Release ${CMAKE_EXTRA} -DGGML_CUDA=1 .. ) 2>&1 | tee -a $OUT/${ci}-cmake.log
     (time make -j ) 2>&1 | tee -a $OUT/${ci}-make.log
 
-    python3 ../convert-hf-to-gguf.py ${path_models} --outfile ${path_models}/ggml-model-f16.gguf
+    python3 ../convert_hf_to_gguf.py ${path_models} --outfile ${path_models}/ggml-model-f16.gguf
 
     model_f16="${path_models}/ggml-model-f16.gguf"
     model_q8_0="${path_models}/ggml-model-q8_0.gguf"

Diff for: convert_hf_to_gguf.py

+4 -4

@@ -404,7 +404,7 @@ def get_vocab_base(self) -> tuple[list[str], list[int], str]:
 
         return tokens, toktypes, tokpre
 
-    # NOTE: this function is generated by convert-hf-to-gguf-update.py
+    # NOTE: this function is generated by convert_hf_to_gguf_update.py
     # do not modify it manually!
     # ref: https://github.com/ggerganov/llama.cpp/pull/6920
     # Marker: Start get_vocab_base_pre
@@ -424,7 +424,7 @@ def get_vocab_base_pre(self, tokenizer) -> str:
 
         res = None
 
-        # NOTE: if you get an error here, you need to update the convert-hf-to-gguf-update.py script
+        # NOTE: if you get an error here, you need to update the convert_hf_to_gguf_update.py script
         # or pull the latest version of the model from Huggingface
         # don't edit the hashes manually!
         if chkhsh == "0ef9807a4087ebef797fc749390439009c3b9eda9ad1a097abbe738f486c01e5":
@@ -499,9 +499,9 @@ def get_vocab_base_pre(self, tokenizer) -> str:
             logger.warning("**************************************************************************************")
             logger.warning("** WARNING: The BPE pre-tokenizer was not recognized!")
             logger.warning("** There are 2 possible reasons for this:")
-            logger.warning("** - the model has not been added to convert-hf-to-gguf-update.py yet")
+            logger.warning("** - the model has not been added to convert_hf_to_gguf_update.py yet")
             logger.warning("** - the pre-tokenization config has changed upstream")
-            logger.warning("** Check your model files and convert-hf-to-gguf-update.py and update them accordingly.")
+            logger.warning("** Check your model files and convert_hf_to_gguf_update.py and update them accordingly.")
             logger.warning("** ref: https://github.com/ggerganov/llama.cpp/pull/6920")
            logger.warning("**")
             logger.warning(f"** chkhsh: {chkhsh}")
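For context on the renamed references above: `get_vocab_base_pre()` identifies the BPE pre-tokenizer by hashing the token IDs that the Hugging Face tokenizer produces for a fixed probe string. A minimal sketch of that idea follows; the model path and probe text here are placeholders, not the ones used by the real script.

```python
# Sketch of the chkhsh mechanism behind get_vocab_base_pre(); the probe text in
# the real script is much longer and exercises whitespace/emoji edge cases.
from hashlib import sha256

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("models/tokenizers/llama-bpe")  # placeholder path
chktxt = "Hello World\n\n \t 3.14"  # placeholder probe text
chktok = tokenizer.encode(chktxt)
chkhsh = sha256(str(chktok).encode()).hexdigest()

# convert_hf_to_gguf.py compares chkhsh against a list of known hashes (the
# if-branches shown in the diff) to pick the pre-tokenizer name; an unknown
# hash triggers the warning block above.
print(chkhsh)
```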

Diff for: convert_hf_to_gguf_update.py

+13 -13

@@ -2,7 +2,7 @@
 # -*- coding: utf-8 -*-
 
 # This script downloads the tokenizer models of the specified models from Huggingface and
-# generates the get_vocab_base_pre() function for convert-hf-to-gguf.py
+# generates the get_vocab_base_pre() function for convert_hf_to_gguf.py
 #
 # This is necessary in order to analyze the type of pre-tokenizer used by the model and
 # provide the necessary information to llama.cpp via the GGUF header in order to implement
@@ -15,9 +15,9 @@
 # - Add a new model to the "models" list
 # - Run the script with your huggingface token:
 #
-# python3 convert-hf-to-gguf-update.py <huggingface_token>
+# python3 convert_hf_to_gguf_update.py <huggingface_token>
 #
-# - Copy-paste the generated get_vocab_base_pre() function into convert-hf-to-gguf.py
+# - Copy-paste the generated get_vocab_base_pre() function into convert_hf_to_gguf.py
 # - Update llama.cpp with the new pre-tokenizer if necessary
 #
 # TODO: generate tokenizer tests for llama.cpp
@@ -37,7 +37,7 @@
 from transformers import AutoTokenizer
 
 logging.basicConfig(level=logging.DEBUG)
-logger = logging.getLogger("convert-hf-to-gguf-update")
+logger = logging.getLogger("convert_hf_to_gguf_update")
 sess = requests.Session()
 
 
@@ -56,10 +56,10 @@ class TOKENIZER_TYPE(IntEnum):
     token = sys.argv[1]
     if not token.startswith("hf_"):
         logger.info("Huggingface token seems invalid")
-        logger.info("Usage: python convert-hf-to-gguf-update.py <huggingface_token>")
+        logger.info("Usage: python convert_hf_to_gguf_update.py <huggingface_token>")
         sys.exit(1)
 else:
-    logger.info("Usage: python convert-hf-to-gguf-update.py <huggingface_token>")
+    logger.info("Usage: python convert_hf_to_gguf_update.py <huggingface_token>")
     sys.exit(1)
 
 # TODO: add models here, base models preferred
@@ -134,7 +134,7 @@ def download_model(model):
         logger.error(f"Failed to download model {model['name']}. Error: {e}")
 
 
-# generate the source code for the convert-hf-to-gguf.py:get_vocab_base_pre() function:
+# generate the source code for the convert_hf_to_gguf.py:get_vocab_base_pre() function:
 
 src_ifs = ""
 for model in models:
@@ -201,7 +201,7 @@ def get_vocab_base_pre(self, tokenizer) -> str:
 
         res = None
 
-        # NOTE: if you get an error here, you need to update the convert-hf-to-gguf-update.py script
+        # NOTE: if you get an error here, you need to update the convert_hf_to_gguf_update.py script
         # or pull the latest version of the model from Huggingface
         # don't edit the hashes manually!
 {src_ifs}
@@ -210,9 +210,9 @@ def get_vocab_base_pre(self, tokenizer) -> str:
             logger.warning("**************************************************************************************")
             logger.warning("** WARNING: The BPE pre-tokenizer was not recognized!")
             logger.warning("** There are 2 possible reasons for this:")
-            logger.warning("** - the model has not been added to convert-hf-to-gguf-update.py yet")
+            logger.warning("** - the model has not been added to convert_hf_to_gguf_update.py yet")
             logger.warning("** - the pre-tokenization config has changed upstream")
-            logger.warning("** Check your model files and convert-hf-to-gguf-update.py and update them accordingly.")
+            logger.warning("** Check your model files and convert_hf_to_gguf_update.py and update them accordingly.")
             logger.warning("** ref: https://github.com/ggerganov/llama.cpp/pull/6920")
             logger.warning("**")
             logger.warning(f"** chkhsh: {{chkhsh}}")
@@ -226,7 +226,7 @@ def get_vocab_base_pre(self, tokenizer) -> str:
         return res
 """
 
-convert_py_pth = pathlib.Path("convert-hf-to-gguf.py")
+convert_py_pth = pathlib.Path("convert_hf_to_gguf.py")
 convert_py = convert_py_pth.read_text(encoding="utf-8")
 convert_py = re.sub(
     r"(# Marker: Start get_vocab_base_pre)(.+?)( +# Marker: End get_vocab_base_pre)",
@@ -237,7 +237,7 @@ def get_vocab_base_pre(self, tokenizer) -> str:
 
 convert_py_pth.write_text(convert_py, encoding="utf-8")
 
-logger.info("+++ convert-hf-to-gguf.py was updated")
+logger.info("+++ convert_hf_to_gguf.py was updated")
 
 # generate tests for each tokenizer model
 
@@ -343,6 +343,6 @@ def get_vocab_base_pre(self, tokenizer) -> str:
 for model in models:
     name = model["name"]
 
-    print(f"python3 convert-hf-to-gguf.py models/tokenizers/{name}/ --outfile models/ggml-vocab-{name}.gguf --vocab-only") # noqa: NP100
+    print(f"python3 convert_hf_to_gguf.py models/tokenizers/{name}/ --outfile models/ggml-vocab-{name}.gguf --vocab-only") # noqa: NP100
 
 logger.info("\n")
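The hunk at line 226 above touches the marker-based regeneration step: the update script rewrites only the region of `convert_hf_to_gguf.py` between the `# Marker: Start/End get_vocab_base_pre` comments. Below is a minimal sketch of that step, with `src_func` standing in for the generated function body; the replacement callable and flags are an assumption here, since the diff only shows the pattern.

```python
# Illustrative sketch of the marker-based regeneration in convert_hf_to_gguf_update.py.
import pathlib
import re

src_func = "    # ... generated get_vocab_base_pre() body ...\n"  # placeholder

convert_py_pth = pathlib.Path("convert_hf_to_gguf.py")
convert_py = convert_py_pth.read_text(encoding="utf-8")

# keep both marker comments, replace everything between them
convert_py = re.sub(
    r"(# Marker: Start get_vocab_base_pre)(.+?)( +# Marker: End get_vocab_base_pre)",
    lambda m: m.group(1) + src_func + m.group(3),
    convert_py,
    flags=re.DOTALL | re.MULTILINE,
)

convert_py_pth.write_text(convert_py, encoding="utf-8")
```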

Diff for: docs/HOWTO-add-model.md

+1 -1

@@ -17,7 +17,7 @@ Also, it is important to check that the examples and main ggml backends (CUDA, M
 ### 1. Convert the model to GGUF
 
 This step is done in python with a `convert` script using the [gguf](https://pypi.org/project/gguf/) library.
-Depending on the model architecture, you can use either [convert-hf-to-gguf.py](../convert-hf-to-gguf.py) or [examples/convert-legacy-llama.py](../examples/convert-legacy-llama.py) (for `llama/llama2` models in `.pth` format).
+Depending on the model architecture, you can use either [convert_hf_to_gguf.py](../convert_hf_to_gguf.py) or [examples/convert_legacy_llama.py](../examples/convert_legacy_llama.py) (for `llama/llama2` models in `.pth` format).
 
 The convert script reads the model configuration, tokenizer, tensor names+data and converts them to GGUF metadata and tensors.
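As the HOWTO sentence above notes, the convert scripts ultimately emit GGUF metadata and tensors through the `gguf` package. A minimal sketch of that output path, assuming the PyPI `gguf` package and loosely following its bundled `examples/writer.py`, with dummy values throughout:

```python
# Minimal GGUF writing sketch with dummy data; not a real model conversion.
import numpy as np
import gguf

writer = gguf.GGUFWriter("dummy.gguf", "llama")  # output path, architecture name
writer.add_block_count(1)                        # architecture metadata (KV pairs)
writer.add_uint32("example.answer", 42)          # arbitrary custom KV, illustrative only
writer.add_tensor("token_embd.weight", np.zeros((8, 4), dtype=np.float32))

writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.write_tensors_to_file()
writer.close()
```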

File renamed without changes.

Diff for: examples/json-schema-pydantic-example.py renamed to examples/json_schema_pydantic_example.py

+1 -1

@@ -1,7 +1,7 @@
 # Usage:
 #! ./llama-server -m some-model.gguf &
 #! pip install pydantic
-#! python json-schema-pydantic-example.py
+#! python json_schema_pydantic_example.py
 
 from pydantic import BaseModel, Extra, TypeAdapter
 from annotated_types import MinLen

Diff for: examples/llava/MobileVLM-README.md

+7 -7

@@ -30,34 +30,34 @@ git clone https://huggingface.co/mtgv/MobileVLM-1.7B
 git clone https://huggingface.co/openai/clip-vit-large-patch14-336
 ```
 
-2. Use `llava-surgery.py` to split the LLaVA model to LLaMA and multimodel projector constituents:
+2. Use `llava_surgery.py` to split the LLaVA model to LLaMA and multimodel projector constituents:
 
 ```sh
-python ./examples/llava/llava-surgery.py -m path/to/MobileVLM-1.7B
+python ./examples/llava/llava_surgery.py -m path/to/MobileVLM-1.7B
 ```
 
-3. Use `convert-image-encoder-to-gguf.py` with `--projector-type ldp` (for **V2** please use `--projector-type ldpv2`) to convert the LLaVA image encoder to GGUF:
+3. Use `convert_image_encoder_to_gguf.py` with `--projector-type ldp` (for **V2** please use `--projector-type ldpv2`) to convert the LLaVA image encoder to GGUF:
 
 ```sh
-python ./examples/llava/convert-image-encoder-to-gguf \
+python ./examples/llava/convert_image_encoder_to_gguf \
     -m path/to/clip-vit-large-patch14-336 \
     --llava-projector path/to/MobileVLM-1.7B/llava.projector \
     --output-dir path/to/MobileVLM-1.7B \
     --projector-type ldp
 ```
 
 ```sh
-python ./examples/llava/convert-image-encoder-to-gguf \
+python ./examples/llava/convert_image_encoder_to_gguf \
     -m path/to/clip-vit-large-patch14-336 \
     --llava-projector path/to/MobileVLM-1.7B_V2/llava.projector \
     --output-dir path/to/MobileVLM-1.7B_V2 \
     --projector-type ldpv2
 ```
 
-4. Use `examples/convert-legacy-llama.py` to convert the LLaMA part of LLaVA to GGUF:
+4. Use `examples/convert_legacy_llama.py` to convert the LLaMA part of LLaVA to GGUF:
 
 ```sh
-python ./examples/convert-legacy-llama.py path/to/MobileVLM-1.7B
+python ./examples/convert_legacy_llama.py path/to/MobileVLM-1.7B
 ```
 
 5. Use `quantize` to convert LLaMA part's DataType from `fp16` to `q4_k`

Diff for: examples/llava/README.md

+10 -10

@@ -38,22 +38,22 @@ git clone https://huggingface.co/openai/clip-vit-large-patch14-336
 pip install -r examples/llava/requirements.txt
 ```
 
-3. Use `llava-surgery.py` to split the LLaVA model to LLaMA and multimodel projector constituents:
+3. Use `llava_surgery.py` to split the LLaVA model to LLaMA and multimodel projector constituents:
 
 ```sh
-python ./examples/llava/llava-surgery.py -m ../llava-v1.5-7b
+python ./examples/llava/llava_surgery.py -m ../llava-v1.5-7b
 ```
 
-4. Use `convert-image-encoder-to-gguf.py` to convert the LLaVA image encoder to GGUF:
+4. Use `convert_image_encoder_to_gguf.py` to convert the LLaVA image encoder to GGUF:
 
 ```sh
-python ./examples/llava/convert-image-encoder-to-gguf.py -m ../clip-vit-large-patch14-336 --llava-projector ../llava-v1.5-7b/llava.projector --output-dir ../llava-v1.5-7b
+python ./examples/llava/convert_image_encoder_to_gguf.py -m ../clip-vit-large-patch14-336 --llava-projector ../llava-v1.5-7b/llava.projector --output-dir ../llava-v1.5-7b
 ```
 
-5. Use `examples/convert-legacy-llama.py` to convert the LLaMA part of LLaVA to GGUF:
+5. Use `examples/convert_legacy_llama.py` to convert the LLaMA part of LLaVA to GGUF:
 
 ```sh
-python ./examples/convert-legacy-llama.py ../llava-v1.5-7b --skip-unknown
+python ./examples/convert_legacy_llama.py ../llava-v1.5-7b --skip-unknown
 ```
 
 Now both the LLaMA part and the image encoder are in the `llava-v1.5-7b` directory.
@@ -70,9 +70,9 @@ git clone https://huggingface.co/liuhaotian/llava-v1.6-vicuna-7b
 pip install -r examples/llava/requirements.txt
 ```
 
-3) Use `llava-surgery-v2.py` which also supports llava-1.5 variants pytorch as well as safetensor models:
+3) Use `llava_surgery_v2.py` which also supports llava-1.5 variants pytorch as well as safetensor models:
 ```console
-python examples/llava/llava-surgery-v2.py -C -m ../llava-v1.6-vicuna-7b/
+python examples/llava/llava_surgery_v2.py -C -m ../llava-v1.6-vicuna-7b/
 ```
 - you will find a llava.projector and a llava.clip file in your model directory
 
@@ -86,13 +86,13 @@ curl -s -q https://huggingface.co/cmp-nct/llava-1.6-gguf/raw/main/config_vit.jso
 
 5) Create the visual gguf model:
 ```console
-python ./examples/llava/convert-image-encoder-to-gguf.py -m vit --llava-projector vit/llava.projector --output-dir vit --clip-model-is-vision
+python ./examples/llava/convert_image_encoder_to_gguf.py -m vit --llava-projector vit/llava.projector --output-dir vit --clip-model-is-vision
 ```
 - This is similar to llava-1.5, the difference is that we tell the encoder that we are working with the pure vision model part of CLIP
 
 6) Then convert the model to gguf format:
 ```console
-python ./examples/convert-legacy-llama.py ../llava-v1.6-vicuna-7b/ --skip-unknown
+python ./examples/convert_legacy_llama.py ../llava-v1.6-vicuna-7b/ --skip-unknown
 ```
 
 7) And finally we can run the llava cli using the 1.6 model version:
File renamed without changes.
File renamed without changes.

Diff for: examples/llava/requirements.txt

+1 -1

@@ -1,3 +1,3 @@
--r ../../requirements/requirements-convert-legacy-llama.txt
+-r ../../requirements/requirements-convert_legacy_llama.txt
 pillow~=10.2.0
 torch~=2.2.1
File renamed without changes.
File renamed without changes.

Diff for: gguf-py/README.md

+5 -5

@@ -3,7 +3,7 @@
 This is a Python package for writing binary files in the [GGUF](https://github.com/ggerganov/ggml/pull/302)
 (GGML Universal File) format.
 
-See [convert-llama-hf-to-gguf.py](https://github.com/ggerganov/llama.cpp/blob/master/convert-hf-to-gguf.py)
+See [convert_hf_to_gguf.py](https://github.com/ggerganov/llama.cpp/blob/master/convert_hf_to_gguf.py)
 as an example for its usage.
 
 ## Installation
@@ -15,13 +15,13 @@ pip install gguf
 
 [examples/writer.py](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/examples/writer.py) — Generates `example.gguf` in the current directory to demonstrate generating a GGUF file. Note that this file cannot be used as a model.
 
-[scripts/gguf-dump.py](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/scripts/gguf-dump.py) — Dumps a GGUF file's metadata to the console.
+[scripts/gguf_dump.py](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/scripts/gguf_dump.py) — Dumps a GGUF file's metadata to the console.
 
-[scripts/gguf-set-metadata.py](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/scripts/gguf-set-metadata.py) — Allows changing simple metadata values in a GGUF file by key.
+[scripts/gguf_set_metadata.py](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/scripts/gguf_set_metadata.py) — Allows changing simple metadata values in a GGUF file by key.
 
-[scripts/gguf-convert-endian.py](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/scripts/gguf-convert-endian.py) — Allows converting the endianness of GGUF files.
+[scripts/gguf_convert_endian.py](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/scripts/gguf_convert_endian.py) — Allows converting the endianness of GGUF files.
 
-[scripts/gguf-new-metadata.py](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/scripts/gguf-new-metadata.py) — Copies a GGUF file with added/modified/removed metadata values.
+[scripts/gguf_new_metadata.py](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/scripts/gguf_new_metadata.py) — Copies a GGUF file with added/modified/removed metadata values.
 
 ## Development
 Maintainers who participate in development of this package are advised to install it in editable mode:
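For a sense of what the renamed `gguf_dump.py` builds on, here is a minimal metadata-listing sketch using the package's `GGUFReader`; the attribute names used (`fields`, `tensors`, `name`, `shape`) follow the gguf-py reader API, and the file path is a placeholder:

```python
# Minimal sketch: list metadata keys and tensor names from a GGUF file,
# roughly what scripts/gguf_dump.py does in far more detail.
from gguf import GGUFReader

reader = GGUFReader("model.gguf")  # placeholder path

for name in reader.fields:         # metadata key-value fields
    print("KV:", name)

for tensor in reader.tensors:      # tensor infos
    print("T :", tensor.name, list(tensor.shape))
```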

Diff for: gguf-py/scripts/__init__.py

+4 -13

@@ -1,13 +1,4 @@
-import os
-
-from importlib import import_module
-
-
-os.environ["NO_LOCAL_GGUF"] = "TRUE"
-
-gguf_convert_endian_entrypoint = import_module("scripts.gguf-convert-endian").main
-gguf_dump_entrypoint = import_module("scripts.gguf-dump").main
-gguf_set_metadata_entrypoint = import_module("scripts.gguf-set-metadata").main
-gguf_new_metadata_entrypoint = import_module("scripts.gguf-new-metadata").main
-
-del import_module, os
+from .gguf_convert_endian import main as gguf_convert_endian_entrypoint
+from .gguf_dump import main as gguf_dump_entrypoint
+from .gguf_set_metadata import main as gguf_set_metadata_entrypoint
+from .gguf_new_metadata import main as gguf_new_metadata_entrypoint
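The point of the snake_case module names above is that the scripts can now be re-exported with plain relative imports, so packaging entry points can refer to them directly (the "scripts entrypoint export" mentioned in the commit message). A hedged sketch of consuming one of these entry points; the import path assumes the gguf-py `scripts` package is importable, and the console-script wiring itself lives in the package metadata, not shown here:

```python
# Illustrative only: call one of the re-exported entry points directly.
# gguf_dump_entrypoint() parses sys.argv itself, e.g. "python this_file.py model.gguf".
from scripts import gguf_dump_entrypoint  # re-exported in gguf-py/scripts/__init__.py

if __name__ == "__main__":
    gguf_dump_entrypoint()
```
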
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.

Diff for: requirements.txt

+1 -1

@@ -4,7 +4,7 @@
 # Package versions must stay compatible across all top-level python scripts.
 #
 
--r ./requirements/requirements-convert-legacy-llama.txt
+-r ./requirements/requirements-convert_legacy_llama.txt
 
 -r ./requirements/requirements-convert_hf_to_gguf.txt
 -r ./requirements/requirements-convert_hf_to_gguf_update.txt

Diff for: requirements/requirements-convert_hf_to_gguf.txt

+1 -1

@@ -1,2 +1,2 @@
--r ./requirements-convert-legacy-llama.txt
+-r ./requirements-convert_legacy_llama.txt
 torch~=2.2.1

+1 -1

@@ -1,2 +1,2 @@
--r ./requirements-convert-legacy-llama.txt
+-r ./requirements-convert_legacy_llama.txt
 torch~=2.2.1

+1 -1

@@ -1 +1 @@
--r ./requirements-convert-legacy-llama.txt
+-r ./requirements-convert_legacy_llama.txt
