libonnx
Inference engine
A lightweight, portable, pure C99 ONNX inference engine for embedded devices with hardware acceleration support.
589 stars
28 watching
108 forks
Language: C
Last commit: 4 months ago
Linked from 1 awesome list
Tags: ai, baremetal, c, deep-neural-networks, deep-learning, embedded, embedded-systems, hardware-acceleration, inference, library, lightweight, machine-learning, neural-network, onnx, portable
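The library exposes a small C API built around a context, named tensors, and a single run call: load the model, fill the input tensor, run the graph, then read the output tensor. The sketch below illustrates that flow under stated assumptions; the function names (`onnx_context_alloc_from_file`, `onnx_tensor_search`, `onnx_run`, `onnx_context_free`), the `datas` field, and the tensor names `"input"`/`"output"` come from memory of the upstream headers rather than from this page, so check them against `onnx.h` for your version.

```c
#include <stdio.h>
#include <onnx.h>

int main(int argc, char * argv[])
{
    /* Load the model; NULL/0 means no custom hardware resolvers are registered. */
    struct onnx_context_t * ctx = onnx_context_alloc_from_file("model.onnx", NULL, 0);
    if(!ctx)
    {
        fprintf(stderr, "failed to load model\n");
        return -1;
    }

    /* Look up the input tensor by the name it has in the ONNX graph
       (the name "input" here is a placeholder). */
    struct onnx_tensor_t * input = onnx_tensor_search(ctx, "input");
    if(input)
    {
        /* Fill input->datas with preprocessed data before running. */
    }

    /* Run one forward pass over the whole graph. */
    onnx_run(ctx);

    /* Read the result back from the output tensor ("output" is a placeholder name). */
    struct onnx_tensor_t * output = onnx_tensor_search(ctx, "output");
    if(output)
    {
        /* Consume output->datas here. */
    }

    onnx_context_free(ctx);
    return 0;
}
```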
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | Fast and scalable neural network inference framework for FPGAs. | 770 |
| | A machine learning inference engine designed to be portable and efficient for embedded systems, with minimal dependencies. | 530 |
| | An embedded-device-friendly C ONNX runtime with zero dependencies. | 196 |
| | A project providing ONNX models and tools for running inference with the LLaMA transformer model on various devices. | 356 |
| | Examples showing how to use ONNX Runtime (ORT) to perform machine learning inference. | 1,243 |
| | A lightweight neural network inference engine for real-time systems. | 614 |
| | A Python library designed to accelerate model inference with high throughput and low latency. | 1,924 |
| | Generates C code from ONNX files for efficient neural network inference on microcontrollers. | 234 |
| | An API and backend for running ONNX models in Scala 3 using typeful, functional deep learning and classical machine learning. | 138 |
| | Designs and deploys neural networks integrated with Xilinx FPGAs for high-throughput applications. | 83 |
| | A lightweight machine learning inference framework built on TensorFlow and optimized for Arm targets. | 1,742 |
| | An abstraction layer for building and running neural networks on iOS using MetalPerformanceShaders and pre-trained models. | 1,798 |
| | Direct neural architecture search on the target task and hardware for efficient model deployment. | 1,429 |
| | A deep learning method that optimizes convolutional neural networks by reducing computational cost while improving regularization and inference efficiency. | 18 |
| | A toolbox for understanding neural network predictions, providing different analysis methods behind a common interface. | 1,271 |