Parse string values for add_special_tokens in vLLM #598

Open · wants to merge 3 commits into main

Conversation

eldarkurtic
Contributor

When evaluating reasoning models on tasks such as AIME, we run evals like this:

MODEL=deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
MODEL_ARGS="pretrained=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilization=0.8,generation_parameters={max_new_tokens:32768,temperature:0.6,top_p:0.95}"
OUTPUT_DIR=data/evals/$MODEL

# AIME 2024
TASK=aime24
lighteval vllm $MODEL_ARGS "custom|$TASK|0|0" \
    --custom-tasks src/open_r1/evaluate.py \
    --use-chat-template \
    --output-dir $OUTPUT_DIR

As part of MODEL_ARGS we would like to be able to specify a value for add_special_tokens, to control whether the BOS token is added by the tokenizer. At the moment this is not possible because the given value is interpreted as a string, whereas the codebase expects a boolean. This PR fixes that by parsing add_special_tokens the same way the other vLLM params are parsed in the VLLMModel class.
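
For illustration, a minimal sketch of the kind of string-to-boolean coercion this refers to; the helper name and call site below are assumptions for illustration, not the actual diff:

# Hypothetical helper: coerce a MODEL_ARGS value into a boolean, the same way
# other vLLM params (e.g. gpu_memory_utilization) are cast before being used.
def as_bool(value) -> bool:
    # Values arriving from the CLI are strings such as "True" or "false";
    # values set programmatically may already be booleans.
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in ("true", "1", "yes")

# Illustrative usage inside model-args parsing:
# add_special_tokens = as_bool(model_args.get("add_special_tokens", True))

With a change along these lines, a run could pass e.g. add_special_tokens=False inside MODEL_ARGS and the tokenizer would skip adding the BOS token as intended.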
