inference
Model benchmarking suite
Measures the performance of deep learning models in various deployment scenarios.
Reference implementations of MLPerf™ inference benchmarks
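Inference benchmark suites like this one typically report throughput (queries per second) and tail latency. A minimal, hypothetical sketch of that idea in plain Python (this is not the MLPerf LoadGen API; `benchmark` and the dummy model are illustrative only):

```python
import time
import statistics

def benchmark(model_fn, queries, warmup=10):
    """Time each query and report throughput (QPS) and p90 latency."""
    for q in queries[:warmup]:        # warm up caches/JIT before timing
        model_fn(q)
    latencies = []
    start = time.perf_counter()
    for q in queries:
        t0 = time.perf_counter()
        model_fn(q)
        latencies.append(time.perf_counter() - t0)
    total = time.perf_counter() - start
    return {
        "qps": len(queries) / total,
        # statistics.quantiles with n=10 returns 9 cut points; the last is p90
        "p90_ms": 1000 * statistics.quantiles(latencies, n=10)[-1],
    }

# Dummy "model": summing a list stands in for a forward pass.
result = benchmark(sum, [list(range(100))] * 50)
```

Real suites such as MLPerf additionally fix the query arrival pattern (e.g. single-stream vs. server scenarios), which this sketch does not model.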
1k stars
57 watching
538 forks
Language: Python
Last commit: 2 months ago
Linked from 1 awesome list
Topics: benchmark, machine-learning
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | A benchmark for evaluating machine learning algorithms' performance on large datasets | 1,874 |
| | A platform for private benchmarking of machine learning models with different trust levels | 7 |
| | A benchmarking project comparing the performance of different machine learning inference frameworks and models on the Go platform | 30 |
| | A framework for hosting and training machine learning models on a blockchain, enabling secure sharing and prediction without requiring users to pay for data or model updates | 559 |
| | Evaluates and compares the performance of multimodal large language models on various tasks | 56 |
| | A lightweight machine learning inference framework built on TensorFlow, optimized for Arm targets | 1,742 |
| | A benchmark for evaluating large language models' ability to process multimodal input | 322 |
| | A high-performance ML model serving framework | 802 |
| | Automates the end-to-end machine learning workflow from code commit to model deployment | 18 |
| | Provides benchmarking functions for mathematical optimization algorithms | 67 |
| | A lightweight MLOps library for small teams and individuals to manage the machine learning model development lifecycle | 22 |
| | Comparative benchmarks of various machine learning algorithms | 169 |
| | Performance benchmarks of various .NET mocking libraries | 22 |
| | An open-source benchmarking framework for evaluating the cross-style visual capability of large multimodal models | 84 |
| | An authoritative source of real-world benchmarks for Python implementations | 877 |