```
InternalError: Traceback (most recent call last):
  2: operator()
        at /workspace/mlc-llm/cpp/tokenizers/tokenizers.cc:459
  1: mlc::llm::Tokenizer::FromPath(tvm::runtime::String const&, std::optional<mlc::llm::TokenizerInfo>)
        at /workspace/mlc-llm/cpp/tokenizers/tokenizers.cc:140
  0: mlc::llm::Tokenizer::DetectTokenizerInfo(tvm::runtime::String const&)
        at /workspace/mlc-llm/cpp/tokenizers/tokenizers.cc:210
  File "/workspace/mlc-llm/cpp/tokenizers/tokenizers.cc", line 210
InternalError: Check failed: (err.empty()) is false: Failed to parse JSON: syntax error at line 1 near: version https://git-lfs.github.com/spec/v1
```
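The parse error is telling: `version https://git-lfs.github.com/spec/v1` is the first line of a Git LFS pointer file, which suggests the model directory contains an undownloaded LFS stub where the real `tokenizer.json` should be. A minimal sketch of how one could detect this before loading (the `is_lfs_pointer` and `check_tokenizer_json` helpers are hypothetical, not part of MLC-LLM):

```python
import json
import tempfile
from pathlib import Path


def is_lfs_pointer(path: Path) -> bool:
    # A Git LFS pointer file starts with the LFS spec line instead of JSON.
    try:
        first_line = path.read_text(encoding="utf-8", errors="replace").splitlines()[0]
    except (OSError, IndexError):
        return False
    return first_line.startswith("version https://git-lfs.github.com/spec/")


def check_tokenizer_json(path: Path) -> None:
    # Fail with an actionable message instead of a raw JSON syntax error.
    if is_lfs_pointer(path):
        raise RuntimeError(
            f"{path} is a Git LFS pointer, not real JSON; "
            "try running `git lfs pull` in the model directory."
        )
    json.loads(path.read_text(encoding="utf-8"))  # raises on malformed JSON


# Demo with a synthetic pointer file:
with tempfile.TemporaryDirectory() as d:
    stub = Path(d) / "tokenizer.json"
    stub.write_text(
        "version https://git-lfs.github.com/spec/v1\n"
        "oid sha256:0000000000000000000000000000000000000000000000000000000000000000\n"
        "size 123\n"
    )
    print(is_lfs_pointer(stub))  # → True
```

If this diagnosis is right, the fix on the packaging side is to re-upload the real `tokenizer.json`, and on the user side to run `git lfs pull` inside the cloned model directory.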
Expected behavior
The model should load correctly, without errors.
Environment
- Platform (e.g. WebGPU/Vulkan/iOS/Android/CUDA): all platforms (tested CPU and CUDA)
- Operating system (e.g. Ubuntu/Windows/MacOS/...): Linux and Windows
- How you installed MLC-LLM (conda, source): pip
- How you installed TVM-Unity (pip, source): pip
- TVM Unity Hash Tag (`python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in tvm.support.libinfo().items()))"`, applicable if you compile models): not relevant
- Any other relevant information: None
Thank you!
🐛 Bug

It looks like all supported Gemma 2 models are failing right now.

To Reproduce

Fails with the traceback shown above.