
RuntimeError: PytorchStreamReader failed reading zip archive: not a ZIP archive #110

Closed
RoversX opened this issue Mar 14, 2023 · 2 comments
Labels
model Model specific

Comments

RoversX commented Mar 14, 2023

Hello, I tried to convert the 7B model to ggml FP16 format, but I ran into a problem. Could this be caused by the model files? 🙏🏻

python3 convert-pth-to-ggml.py models/7B/ 1
.
├── CMakeLists.txt
├── LICENSE
├── Makefile
├── README.md
├── convert-pth-to-ggml.py
├── ggml.c
├── ggml.h
├── ggml.o
├── main
├── main.cpp
├── models
│   ├── 7B
│   │   ├── checklist.chk
│   │   ├── consolidated.00.pth
│   │   └── params.json
│   ├── tokenizer.model
│   └── tokenizer_checklist.chk
├── quantize
├── quantize.cpp
├── quantize.sh
├── utils.cpp
├── utils.h
└── utils.o
(Lab2) @-MacBook-Pro llama.cpp % python convert-pth-to-ggml.py models/7B/ 1
{'dim': 4096, 'multiple_of': 256, 'n_heads': 32, 'n_layers': 32, 'norm_eps': 1e-06, 'vocab_size': 32000}
n_parts =  1
Processing part  0
Traceback (most recent call last):
  File "Lab2/llama.cpp/convert-pth-to-ggml.py", line 88, in <module>
    model = torch.load(fname_model, map_location="cpu")
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Lab2/lib/python3.11/site-packages/torch/serialization.py", line 799, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Lab2/lib/python3.11/site-packages/torch/serialization.py", line 285, in __init__
    super().__init__(torch._C.PyTorchFileReader(name_or_buffer))
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: PytorchStreamReader failed reading zip archive: not a ZIP archive
(Lab2) @-MacBook-Pro llama.cpp % 
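The "not a ZIP archive" error means `torch.load` could not parse `consolidated.00.pth`: modern PyTorch checkpoints are ZIP archives internally, so a truncated or otherwise corrupted download fails this way before any model code runs. A minimal sketch to pre-check the file (the helper name `looks_like_torch_zip` is hypothetical, not part of llama.cpp):

```python
import zipfile


def looks_like_torch_zip(path):
    """Return True if the file at `path` looks like a valid ZIP archive.

    Modern PyTorch .pth checkpoints are ZIP archives, so a corrupted,
    truncated, or HTML-error-page download fails this check before
    torch.load() would raise "not a ZIP archive".
    """
    try:
        return zipfile.is_zipfile(path)
    except OSError:
        # Missing or unreadable file: treat the same as "not a valid archive".
        return False
```

If this returns False for `models/7B/consolidated.00.pth`, the download is almost certainly the problem rather than the conversion script.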
wyy912 commented Mar 14, 2023

Same issue; I fixed it by re-downloading the 7B model.

meta-llama/llama#73

The first time I downloaded via IPFS; the second time I downloaded via magnet link, and the problem was gone.

Hope this helps.
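Before re-downloading, it is worth verifying the weights against the checksums shipped alongside them: the `checklist.chk` file in `models/7B/` contains `md5sum`-style lines (hash, whitespace, filename), so a mismatch pinpoints the corrupt file. A minimal sketch (the helper name `md5_of` is an assumption, not a standard tool):

```python
import hashlib


def md5_of(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading it in 1 MiB chunks
    so multi-gigabyte checkpoints don't need to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            h.update(block)
    return h.hexdigest()
```

Comparing `md5_of("models/7B/consolidated.00.pth")` against the corresponding entry in `checklist.chk` tells you immediately whether the download is intact.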

RoversX commented Mar 14, 2023

> Same issue; I fixed it by re-downloading the 7B model.
>
> facebookresearch/llama#73
>
> The first time I downloaded via IPFS; the second time I downloaded via magnet link, and the problem was gone.
>
> Hope this helps.

Wow, thank you! It actually works!
