ml_privacy_meter

Privacy Auditor

An auditing tool to assess the privacy risks of machine learning models

Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms.
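The core privacy risk that tools like Privacy Meter measure is membership inference: whether an attacker can tell if a specific record was in a model's training set. The snippet below is not Privacy Meter's API; it is a minimal, generic sketch of the classic loss-threshold membership inference attack, using synthetic per-example losses (training points typically have lower loss than held-out points). All names and the synthetic data are illustrative.

```python
import random

def loss_threshold_attack(member_losses, nonmember_losses, threshold):
    """Predict 'member' when an example's loss is below the threshold.

    Returns the attack's accuracy over the combined set: values near 0.5
    mean little leakage; values near 1.0 mean strong leakage.
    """
    correct = sum(loss < threshold for loss in member_losses)
    correct += sum(loss >= threshold for loss in nonmember_losses)
    return correct / (len(member_losses) + len(nonmember_losses))

random.seed(0)
# Hypothetical losses: members (training points) tend to have lower loss
# than non-members, which is what the attack exploits.
members = [random.gauss(0.2, 0.1) for _ in range(1000)]
nonmembers = [random.gauss(0.8, 0.3) for _ in range(1000)]
acc = loss_threshold_attack(members, nonmembers, threshold=0.5)
```

An audit would report a metric like this (or an ROC curve over all thresholds) as the model's empirical privacy risk; a well-generalized or differentially private model should push the attack accuracy back toward 0.5.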

GitHub

Stars: 613
Watching: 19
Forks: 103
Language: Python
Last commit: 3 months ago
Topics: data-privacy, data-protection, data-protection-impact-assessment, explainable-ai, gdpr, inference, information-leakage, machine-learning, membership-inference-attack, privacy, privacy-audit

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| tensorflow/privacy | A Python library for training machine learning models while preserving the privacy of sensitive data | 1,947 |
| privacytrustlab/bias_in_fl | Investigates how bias can be introduced and spread in machine learning models during federated learning, and aims to detect and mitigate this issue | 11 |
| iamgroot42/mimir | A Python package for measuring memorization in large language models | 126 |
| h1r0gh057/anonymous | A Python implementation of a tool designed to provide anonymity and privacy in software development and data analysis | 1,844 |
| microsoft/private-benchmarking | A platform for private benchmarking of machine learning models with different trust levels | 7 |
| freedomintelligence/mllm-bench | Evaluates and compares the performance of multimodal large language models on various tasks | 56 |
| abhinav-bohra/privacy-preserving-ml | Implements an SVM model that makes predictions on encrypted data while preserving the client's privacy | 1 |
| algofairness/blackboxauditing | A software package for auditing and analyzing machine learning models to detect unfair biases | 130 |
| eric-ai-lab/fedvln | An open-source implementation of a federated learning framework to protect data privacy in embodied agent learning for Vision-and-Language Navigation | 13 |
| monalabs/mona-openai | An integration client providing real-time monitoring and analysis of OpenAI API usage | 93 |
| openmined/private-ai-resources | A curated collection of resources and libraries for secure machine learning research and development | 470 |
| adebayoj/fairml | An auditing toolbox to assess the fairness of black-box predictive models | 361 |
| shreya-28/secure-ml | Secure linear regression in the semi-honest two-party setting | 37 |
| jphall663/interpretable_machine_learning_with_python | Teaches software developers how to build transparent and explainable machine learning models using Python | 673 |
| nyu-mll/bbq | A dataset and benchmarking framework to evaluate question answering models on detecting and mitigating social biases | 92 |