mace
Mobile AI Compute Engine
A framework for deep learning inference on mobile devices
MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms.
Stars: 5k · Watching: 229 · Forks: 819 · Language: C++ · Last commit: 8 months ago · Linked from 1 awesome list
Topics: deep-learning, hvx, machine-learning, neon, neural-network, opencl
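MACE deploys models through a YAML deployment file that names the model, its input/output tensors, and the target runtime (CPU, GPU, or DSP). The sketch below follows the deployment-file format described in MACE's documentation; the model name, file path, tensor names, and shapes are all hypothetical placeholders and should be replaced with values for your own model:

```yaml
# Hypothetical MACE deployment file (field names per MACE docs; values are placeholders)
library_name: example_model
target_abis: [arm64-v8a]
model_graph_format: file
model_data_format: file
models:
  example_model:                       # hypothetical model name
    platform: tensorflow               # source framework of the model
    model_file_path: /path/to/model.pb # placeholder path
    subgraphs:
      - input_tensors:
          - input                      # hypothetical input tensor name
        input_shapes:
          - 1,224,224,3
        output_tensors:
          - output                     # hypothetical output tensor name
        output_shapes:
          - 1,1001
    runtime: cpu+gpu                   # heterogeneous CPU/GPU execution
```

The conversion tooling in the MACE repository consumes a file like this to convert the source model into MACE's internal format before on-device inference; consult the project's documentation for the authoritative field list.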
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | An open machine learning framework for building classical, deep, or hybrid models on various hardware platforms. | 5,555 |
| | A low-code framework for building custom deep learning models and neural networks. | 11,236 |
| | An efficient large language model inference engine leveraging consumer-grade GPUs on PCs. | 8,011 |
| | A framework that automatically compresses and accelerates deep learning models to make them suitable for mobile devices with limited computational resources. | 2,787 |
| | A toolset for deploying deep learning models on various devices and platforms. | 2,797 |
| | A platform providing pre-built machine learning models and APIs for cross-platform deployment on various devices. | 27,962 |
| | A Python library designed to accelerate model inference with high throughput and low latency. | 1,924 |
| | A lightweight deep learning framework developed by Alibaba for efficient on-device inference and training of neural networks. | 8,824 |
| | A cross-platform machine learning framework for .NET that enables developers to build, train, and deploy models without prior ML expertise. | 9,071 |
| | A lightweight machine learning inference framework built on TensorFlow, optimized for Arm targets. | 1,742 |
| | An experimental software framework to run AI models on diverse devices without requiring expensive GPUs. | 17,369 |
| | A high-level Java framework for building and deploying deep learning models. | 4,204 |
| | A toolkit for easy, high-performance deployment of deep learning models on various hardware platforms. | 3,034 |
| | A toolkit for optimizing and deploying artificial intelligence models in various applications. | 7,439 |
| | Generates large language model outputs in high-throughput mode on single GPUs. | 9,236 |