# private-ai-resources

**SOON TO BE DEPRECATED**

Private machine learning progress: a curated collection of resources and libraries for secure machine learning research and development.
## Related projects
| Repository | Description | Stars |
| --- | --- | --- |
| | An effort to develop and evaluate private machine learning frameworks | 19 |
| | Enables secure, on-device machine learning training and inference for Android devices using PySyft models | 86 |
| | Secure linear regression in the semi-honest two-party setting | 37 |
| | A Python library for training machine learning models while preserving the privacy of sensitive data | 1,947 |
| | An online repository providing resources and information on explainable AI, algorithmic fairness, ML security, and related topics | 107 |
| | A framework for training neural networks on vertically partitioned data while preserving user privacy through secure set intersection | 215 |
| | An open-ended critical reading list and resource collection on the sociotechnical implications of AI/ML for engineers, scientists, designers, policy makers, and the public | 366 |
| | A framework for applying secure computing techniques to machine learning models without modifying the underlying frameworks | 1,554 |
| | An AI classifier designed to determine whether text is written by humans or machines | 122 |
| | An open-source project that explores the intersection of machine learning and security to develop tools for detecting vulnerabilities in web applications | 1,987 |
| | A personal AI operating system integrating various AI modules and agents for automation and productivity | 1,720 |
| | A specification for an API providing access to AI capabilities | 1,332 |
| | Next-generation email project aiming to address common security and usability issues through experimentation with various technologies | 473 |
| | Verifies the accuracy of a private machine learning model on Ethereum using a zk-SNARK proof | 214 |
| | Evaluates the confidentiality of Large Language Models integrated with external tools and services | 30 |
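Several of the projects above focus on training models while preserving the privacy of sensitive data. As an illustrative sketch of the core idea behind differentially private training (not code from any listed project — the function name and parameters are hypothetical), one DP-SGD-style update clips each per-example gradient and adds calibrated Gaussian noise:

```python
import numpy as np

def dp_sgd_step(weights, per_example_grads, lr=0.1,
                clip_norm=1.0, noise_mult=1.1, rng=None):
    """One DP-SGD-style update (illustrative sketch):
    clip each per-example gradient to L2 norm clip_norm,
    average, add Gaussian noise, take a gradient step."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise standard deviation follows the usual sigma * C / batch_size scaling.
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return weights - lr * (mean_grad + noise)

# Example: two per-example gradients, one of which gets clipped.
w = dp_sgd_step(np.zeros(3), [np.array([10.0, 0.0, 0.0]),
                              np.array([0.0, 0.5, 0.0])])
```

Clipping bounds any single example's influence on the update, which is what lets the added noise translate into a formal differential-privacy guarantee; production libraries additionally track the cumulative privacy budget across steps.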