llmperf
LLMPerf is a library for validating and benchmarking the performance of large language model (LLM) APIs
678 stars
9 watching
115 forks
Language: Python
Last commit: 4 months ago
Linked from 1 awesome list
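To illustrate the kind of measurement such a benchmarking tool performs (time to first token, total latency, streaming throughput), here is a minimal, hypothetical Python sketch against an OpenAI-compatible endpoint. It is not llmperf's own API or CLI; the model name and environment variables are assumptions.

```python
# Minimal latency/throughput probe against an OpenAI-compatible endpoint.
# Illustrative sketch only; this is not llmperf's actual interface.
import time
from openai import OpenAI  # assumes the `openai` Python package is installed

client = OpenAI()  # reads OPENAI_API_KEY / OPENAI_BASE_URL from the environment


def probe(model: str, prompt: str) -> dict:
    """Stream one completion and record first-token and total latency."""
    start = time.perf_counter()
    first_token_at = None
    chunks = 0
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            if first_token_at is None:
                first_token_at = time.perf_counter()  # time to first token
            chunks += 1
    end = time.perf_counter()
    return {
        "ttft_s": (first_token_at or end) - start,
        "total_s": end - start,
        "chunks_per_s": chunks / (end - start),
    }


if __name__ == "__main__":
    # "gpt-4o-mini" is only an example model name.
    print(probe("gpt-4o-mini", "Write one sentence about benchmarking."))
```

A full benchmark would repeat such probes across many concurrent requests and aggregate the distributions, which is the workload llmperf automates.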
Related projects:
| Repository | Description | Stars |
|---|---|---|
| A benchmarking framework for large language models | 81 |
| A benchmark for evaluating large language models in multiple languages and formats | 93 |
| A framework for managing and testing large language models to evaluate their performance and optimize user experiences. | 451 |
| A Python web framework specifically designed to build LLM microservices with built-in support for FastAPI and streaming capabilities. | 978 |
| Provides a comprehensive framework for evaluating Large Language Model (LLM) applications and pipelines with customizable metrics | 455 |
| A high-performance LLM written in Python/Jax for training and inference on Google Cloud TPUs and GPUs. | 1,557 |
| An open-source implementation of a vision-language instructed large language model | 513 |
| Library that provides a unified API to interact with various Large Language Models (LLMs) | 367 |
| Measures the performance of deep learning models in various deployment scenarios. | 1,256 |
| A unified framework for scaling AI and Python applications by providing a distributed runtime and a set of libraries for machine learning and other compute tasks. | 34,412 |
| An open-source benchmarking framework for evaluating cross-style visual capability of large multimodal models | 84 |
| A framework for leveraging language models in production code | 216 |
| Evaluates and benchmarks multimodal language models' ability to process visual, acoustic, and textual inputs simultaneously. | 15 |
| An open-source toolkit for building and evaluating large language models | 267 |
| A lightweight framework for building agent-based applications using LLMs and transformer architectures | 1,924 |