### Note: DO NOT use a quantized model or quantization_bit when merging LoRA adapters
### model
model_name_or_path: /root/modelzoo/DeepSeek-R1-BF16
adapter_name_or_path: saves/deepseek-r1
template: deepseek3
trust_remote_code: true
### export
export_dir: output/DeepSeek-R1-SFT
export_size: 5
export_device: cpu
export_legacy_format: false
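In LLaMA-Factory, `export_size` caps the per-file shard size (in GB) of the exported checkpoint, so `export_size: 5` means the merged weights are written as a series of files of at most ~5 GB each. As a rough sketch of the shard count this implies (the ~671B parameter count for DeepSeek-R1 and the 2-byte BF16 width are illustrative assumptions, not taken from the issue):

```python
import math

def estimate_shards(num_params: float, bytes_per_param: int, export_size_gb: int) -> int:
    """Estimate how many shard files an export produces when each
    file is capped at export_size_gb gigabytes."""
    total_gb = num_params * bytes_per_param / 1e9  # total checkpoint size in GB
    return math.ceil(total_gb / export_size_gb)

# DeepSeek-R1 in BF16: ~671B params * 2 bytes ≈ 1342 GB of weights,
# so export_size=5 yields on the order of 269 shard files.
print(estimate_shards(671e9, 2, 5))  # → 269
```

Note that `export_size` only changes how the merged weights are split on disk; with `export_device: cpu` as in the config above, merging a model of this size on CPU is expected to be slow regardless of the shard size.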
Reminder
System Info
llamafactory
version: 0.9.2.dev0

Reproduction
1. I tried
and it reported an error.

2. I used llamafactory export to merge the LoRA weights; the configuration file I used is shown above.
However, it is quite slow. I would like to ask what exactly `export_size=5` means here, and what the correct way is to deploy R1 after training.
Others
No response