onnxruntime
ONNX Runtime: a cross-platform, high-performance accelerator for machine learning inferencing and training.
15k stars
248 watching
3k forks
Language: C++
Last commit: about 1 month ago
Linked from 3 awesome lists
Topics: ai-framework, deep-learning, hardware-acceleration, machine-learning, neural-networks, onnx, pytorch, scikit-learn, tensorflow
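As a quick orientation to what the project does, here is a minimal sketch of inference with ONNX Runtime's Python API. The model path, input shape, and dtype below are placeholders rather than values taken from this repository.

```python
# Minimal sketch of running inference with ONNX Runtime's Python API.
# "model.onnx" and the dummy input shape are placeholders; substitute your own model.
import numpy as np
import onnxruntime as ort

# Create an inference session; the providers list selects the execution backend
# (CPU here; a GPU provider such as "CUDAExecutionProvider" can be listed first).
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Inspect the model's expected input name and shape.
input_meta = session.get_inputs()[0]
print(input_meta.name, input_meta.shape)

# Run inference on a dummy input (shape and dtype must match the model).
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_meta.name: dummy})
print(outputs[0].shape)
```

The `providers` argument is how ORT exposes its hardware-acceleration backends; on an unsupported machine it simply falls back to the CPU provider.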
Related projects:
Repository | Description | Stars |
---|---|---|
microsoft/onnxruntime-inference-examples | Examples for using ONNX Runtime (ORT) to perform machine learning inferencing. | 1,243 |
onnx/onnx | Enables interoperability between machine learning frameworks by providing an open standard format for AI models. | 18,098 |
microsoft/onnxruntime-training-examples | Accelerates training of large transformer models by providing optimized kernels and memory optimizations. | 317 |
emergentorder/onnx-scala | An API and backend for running ONNX models in Scala 3 using typeful, functional deep learning and classical machine learning. | 138 |
dotnet/machinelearning | A cross-platform machine learning framework for .NET that enables developers to build, train, and deploy models without prior expertise in ML. | 9,071 |
microsoft/cntk | A unified deep learning toolkit that describes neural networks as a series of computational steps via a directed graph. | 17,534 |
xboot/libonnx | An ONNX inference engine for embedded devices with hardware acceleration support. | 589 |
kraiskil/onnx2c | Generates C code from ONNX files for efficient neural network inference on microcontrollers. | 234 |
microsoft/mmdnn | A toolset to convert and manage deep learning models across multiple frameworks. | 5,802 |
alrevuelta/connxr | An embedded-device-friendly C ONNX runtime with zero dependencies. | 196 |
microsoft/deepspeed | A deep learning optimization library that simplifies distributed training and inference on modern computing hardware. | 35,863 |
triton-inference-server/server | Open-source software that enables deployment of AI models from multiple deep learning and machine learning frameworks on various devices. | 8,460 |
tensorflow/serving | A high-performance serving system for machine learning models in production environments. | 6,195 |
microsoft/lightgbm | A high-performance gradient boosting framework for machine learning tasks. | 16,769 |
xiaomi/mace | A framework for deep learning inference on mobile devices. | 4,949 |
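Several of the related projects above (onnx/onnx, plus the pytorch and scikit-learn topics) reflect that ORT consumes the ONNX interchange format rather than framework-specific graphs. Below is a hedged sketch of that round trip, assuming PyTorch is installed; the tiny model and the file name "tiny.onnx" are illustrative only, not part of this repository.

```python
# Sketch: export a PyTorch model to the ONNX format, then run it with ONNX Runtime.
import numpy as np
import onnxruntime as ort
import torch
import torch.nn as nn

# A toy model purely for illustration.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2)).eval()
example = torch.randn(1, 4)

# Export to ONNX using PyTorch's built-in converter.
torch.onnx.export(model, example, "tiny.onnx",
                  input_names=["x"], output_names=["y"])

# Load the exported graph in ONNX Runtime and check it matches PyTorch's output.
session = ort.InferenceSession("tiny.onnx", providers=["CPUExecutionProvider"])
ort_out = session.run(None, {"x": example.numpy()})[0]
torch_out = model(example).detach().numpy()
print(np.allclose(ort_out, torch_out, atol=1e-5))
```

Once exported, the same .onnx file can be loaded from any of ORT's language bindings (C, C++, C#, Java, JavaScript, Python) without the originating framework being installed.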