Hello, I tried to convert the 7B model to ggml FP16 format, but I ran into a problem. Could this be an issue with the model itself? 🙏🏻
```
python3 convert-pth-to-ggml.py models/7B/ 1
```
```
.
├── CMakeLists.txt
├── LICENSE
├── Makefile
├── README.md
├── convert-pth-to-ggml.py
├── ggml.c
├── ggml.h
├── ggml.o
├── main
├── main.cpp
├── models
│   ├── 7B
│   │   ├── checklist.chk
│   │   ├── consolidated.00.pth
│   │   └── params.json
│   ├── tokenizer.model
│   └── tokenizer_checklist.chk
├── quantize
├── quantize.cpp
├── quantize.sh
├── utils.cpp
├── utils.h
└── utils.o
```
```
(Lab2) @-MacBook-Pro llama.cpp % python convert-pth-to-ggml.py models/7B/ 1
{'dim': 4096, 'multiple_of': 256, 'n_heads': 32, 'n_layers': 32, 'norm_eps': 1e-06, 'vocab_size': 32000}
n_parts = 1
Processing part 0
Traceback (most recent call last):
  File "Lab2/llama.cpp/convert-pth-to-ggml.py", line 88, in <module>
    model = torch.load(fname_model, map_location="cpu")
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Lab2/lib/python3.11/site-packages/torch/serialization.py", line 799, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Lab2/lib/python3.11/site-packages/torch/serialization.py", line 285, in __init__
    super().__init__(torch._C.PyTorchFileReader(name_or_buffer))
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: PytorchStreamReader failed reading zip archive: not a ZIP archive
(Lab2) @-MacBook-Pro llama.cpp %
```
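That "not a ZIP archive" error means `torch.load` expected a ZIP-based checkpoint (the serialization format PyTorch has used by default since 1.6) but `consolidated.00.pth` is not one, which usually points to a truncated or corrupted download rather than a bug in the conversion script. A minimal check using only the standard library (the path matches the tree above):

```python
import zipfile

# PyTorch >= 1.6 saves checkpoints as ZIP archives, so an intact
# consolidated.00.pth should pass this check; a truncated or
# corrupted download will print False.
path = "models/7B/consolidated.00.pth"
print(f"{path} is a valid ZIP archive: {zipfile.is_zipfile(path)}")
```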
Same issue; I fixed it by redownloading the 7B model.
meta-llama/llama#73
The first time I downloaded it via IPFS; the second time I used the magnet link, and the problem was gone.
Hope it helps!
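Before re-running the conversion, you can also verify the fresh download against `checklist.chk`. A minimal sketch, assuming the file uses the `<md5>  <filename>` layout that the official LLaMA download script checks with `md5sum -c`:

```python
import hashlib
from pathlib import Path

model_dir = Path("models/7B")

def md5_of(path: Path, chunk: int = 1 << 20) -> str:
    # Stream the file so the multi-GB checkpoint is never held in RAM.
    h = hashlib.md5()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Each line of checklist.chk is expected to be "<md5>  <filename>"
# (assumption: the same format the official download script verifies).
for line in (model_dir / "checklist.chk").read_text().splitlines():
    expected, name = line.split()
    status = "OK" if md5_of(model_dir / name) == expected else "FAILED"
    print(f"{name}: {status}")
```

If every file reports OK, the RuntimeError above should not recur.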
Wow, thank you! It actually works!