ml_privacy_meter
Privacy Auditor
An auditing tool to assess the privacy risks of machine learning models.
Privacy Meter: an open-source library to audit data privacy in statistical and machine learning algorithms.
613 stars
19 watching
103 forks
Language: Python
last commit: 3 months ago
Topics: data-privacy, data-protection, data-protection-impact-assessment, explainable-ai, gdpr, inference, information-leakage, machine-learning, membership-inference-attack, privacy, privacy-audit
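To make the kind of risk such an auditor quantifies concrete, here is a minimal sketch of a loss-threshold membership inference attack, one of the simplest signals of training-data leakage. This is an illustration using scikit-learn, not Privacy Meter's actual API; all names and the dataset are hypothetical.

```python
# Illustrative sketch (NOT Privacy Meter's API): a loss-threshold
# membership inference attack. If a model's per-example loss is
# systematically lower on training ("member") points than on held-out
# ("non-member") points, an attacker can guess membership -- a privacy risk.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic data: first half trains the target model, second half is held out.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, y_train = X[:1000], y[:1000]   # "members"
X_out, y_out = X[1000:], y[1000:]       # "non-members"

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def per_example_loss(m, X, y):
    # Cross-entropy loss of the model on each individual example.
    p = np.clip(m.predict_proba(X)[np.arange(len(y)), y], 1e-12, 1.0)
    return -np.log(p)

# Lower loss suggests membership, so score = negative loss.
scores = np.concatenate([-per_example_loss(model, X_train, y_train),
                         -per_example_loss(model, X_out, y_out)])
membership = np.concatenate([np.ones(1000), np.zeros(1000)])

# AUC near 0.5 means the attack cannot distinguish members from
# non-members (little leakage); values near 1.0 indicate serious leakage.
auc = roc_auc_score(membership, scores)
print(f"membership-inference AUC: {auc:.3f}")
```

A full audit tool runs far stronger attacks (e.g. shadow models, calibrated per-example thresholds) and reports attack power at fixed false-positive rates, but the underlying question is the same one this sketch poses: does the model behave measurably differently on its training data?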
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | A Python library for training machine learning models while preserving the privacy of sensitive data. | 1,947 |
| | Investigates how bias can be introduced and spread in machine learning models during federated learning, and aims to detect and mitigate this issue. | 11 |
| | A Python package for measuring memorization in large language models. | 126 |
| | A Python tool designed to provide anonymity and privacy in software development and data analysis. | 1,844 |
| | A platform for private benchmarking of machine learning models with different trust levels. | 7 |
| | Evaluates and compares the performance of multimodal large language models on various tasks. | 56 |
| | An SVM implementation that makes predictions on encrypted data while preserving the client's privacy. | 1 |
| | A software package for auditing and analyzing machine learning models to detect unfair biases. | 130 |
| | An open-source federated learning framework that protects data privacy in embodied agent learning for Vision-and-Language Navigation. | 13 |
| | An integration client providing real-time monitoring and analysis of OpenAI API usage. | 93 |
| | A curated collection of resources and libraries for secure machine learning research and development. | 470 |
| | An auditing toolbox to assess the fairness of black-box predictive models. | 361 |
| | Secure Linear Regression in the Semi-Honest Two-Party Setting. | 37 |
| | Teaches software developers how to build transparent and explainable machine learning models using Python. | 673 |
| | A dataset and benchmarking framework for evaluating question answering models on detecting and mitigating social biases. | 92 |