Repositories list
106 repositories
- This repository contains automation scripts designed to run MLPerf Inference benchmarks. Originally developed for the Collective Mind (CM) automation framework, these scripts have been adapted to leverage the MLC automation framework, maintained by the MLCommons Benchmark Infrastructure Working Group.
- croissant (Public): Croissant is a high-level format for machine learning datasets that brings together four rich layers.
- inference_results_v5.0 (Public): This repository contains the results and code for the MLPerf™ Inference v5.0 benchmark.
- tiny (Public)
- mobile_app_open (Public)
- ailuminate (Public)
- GaNDLF (Public): A generalizable application framework for segmentation, regression, and classification using PyTorch.
- training (Public)
- cm4mlperf-results (Public): CM interface and automation recipes to analyze MLPerf Inference, Tiny and Training results. The goal is to make it easier for the community to visualize, compare and reproduce MLPerf results and add derived metrics such as Performance/Watt or Performance/$.
- power-dev (Public)
- abtf-ssd-pytorch (Public)
- policies (Public)
- inference_results_visualization_template (Public template)
- mlcflow (Public): MLCFlow: Simplifying MLPerf Automations.
- storage (Public): MLPerf™ Storage Benchmark Suite.
- submissions_algorithms (Public)
- mobile_results_v5.0 (Public)
- ck (Public): Collective Knowledge (CK), Collective Mind (CM/CMX) and MLPerf automations: community-driven projects to facilitate collaborative and reproducible research and to learn how to run AI, ML, and other emerging workloads more efficiently and cost-effectively across diverse models, datasets, software, and hardware using MLPerf methodology and benchmarks.
- GaNDLF-Synth (Public)
- chakra (Public)