Issues: huggingface/lighteval

[BUG] latest release not working and causes unexpected keyword argument
bug · #612 · opened Mar 14, 2025 by maziyarpanahi

[FT] endpoint calling API in parallel is in conflict with rate limiting
feature request · #608 · opened Mar 10, 2025 by mapmeld

[FT] Support local datasets
feature request · #604 · opened Mar 5, 2025 by anilravuri

[BUG] AttributeError: 'torch_dtype' Not Found in VLLMModelConfig When dtype Not Specified
bug · #601 · opened Mar 4, 2025 by fabfish

[EVAL] Which version of LightEval evaluates AIME 24, AIME 25, LiveCodeBench, and the Distill R1 model?
new task · #596 · opened Mar 3, 2025 by wccccp

[BUG] Cannot import lighteval.pipeline using vllm
bug · #595 · opened Mar 2, 2025 by HaoZhongkai

[FT] Use of the "--save-details" setting for displaying the generated contents and answers
feature request · #594 · opened Feb 28, 2025 by zwxandy

[BUG] Unable to run OALL V2 tasks
bug · #592 · opened Feb 28, 2025 by gokulr-cerebras

[FT] Assistant Response Prefilling
feature request · #591 · opened Feb 28, 2025 by matmult

Something wrong in the parser for generation_parameters in main_sglang.py
bug · #590 · opened Feb 27, 2025 by HancCui

[CORE] Make parsing of model args and generation parameters robust
prio · #579 · opened Feb 21, 2025 by lewtun

[FT] Propagate batch size control for vLLM backend
feature request · #573 · opened Feb 18, 2025 by alvin319

[FT] LiteLLM concurrency parameters hard-coded
feature request · #567 · opened Feb 16, 2025 by lhl

[BUG] error setting tokenizer with custom generation params for vllm
bug · #563 · opened Feb 14, 2025 by rawsh

[BUG] Nanotron runner imports non-existent
bug · #555 · opened Feb 13, 2025 by jquesnelle

How can I use "community|alghafa:meta_ar_dialects" as a task?
#554 · opened Feb 12, 2025 by pratim808

[BUG] tiktoken is not an optional dependency
bug · #546 · opened Feb 8, 2025 by hynky1999
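
Several of the entries above (#554, #594, #612) concern the same CLI surface: which task string to pass to lighteval and which flags a given release accepts. As a hedged illustration only, the sketch below drives the CLI from Python using the task string quoted in issue #554 and the --save-details flag discussed in issue #594. The positional model-args/task layout, the "|0|0" few-shot suffix, the --custom-tasks flag, and the community_tasks/arabic_evals.py path are assumptions that may not match the release you have installed (issue #612 is about exactly this kind of drift), so verify against `lighteval accelerate --help` first.

```python
# Hedged sketch: invoking the lighteval CLI from Python.
# ASSUMPTIONS (check `lighteval accelerate --help` for your release):
#   - task strings follow "suite|task|num_fewshot|truncate_fewshot",
#   - community tasks require --custom-tasks pointing at the task module,
#   - --save-details is the flag discussed in issue #594.
import subprocess

model_args = "pretrained=Qwen/Qwen2.5-0.5B-Instruct"  # hypothetical model choice
task = "community|alghafa:meta_ar_dialects|0|0"       # task name from issue #554

cmd = [
    "lighteval", "accelerate",
    model_args,
    task,
    "--custom-tasks", "community_tasks/arabic_evals.py",  # assumed module path
    "--save-details",   # per issue #594: save per-sample generations and answers
    "--output-dir", "results",
]
print("Running:", " ".join(cmd))
subprocess.run(cmd, check=True)  # raises CalledProcessError on a non-zero exit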
```