CAFE
Data protection mechanism
An approach to preventing data leakage in distributed machine learning models by shielding sensitive information during the training process.
21 stars
1 watching
6 forks
Language: Python
last commit: about 3 years ago
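The description above says the project shields sensitive information during distributed training. The sketch below is not taken from this repository and does not reflect its actual mechanism; it is a minimal, numpy-only illustration of one common shielding pattern in distributed training, where workers clip and noise their local gradients before sharing them, so raw records never leave the data owner. The names `local_gradient` and `shield` and all parameters are illustrative assumptions.

```python
# Minimal sketch (illustrative only, not this repository's code): two workers
# jointly train a logistic-regression model by exchanging shielded gradients.
import numpy as np

rng = np.random.default_rng(0)

def local_gradient(w, X, y):
    """Logistic-regression gradient computed on one worker's private data."""
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (preds - y) / len(y)

def shield(grad, clip_norm=1.0, noise_std=0.1):
    """Clip and perturb a gradient so the shared update reveals less about the data."""
    norm = np.linalg.norm(grad)
    if norm > clip_norm:
        grad = grad * (clip_norm / norm)
    return grad + rng.normal(0.0, noise_std, size=grad.shape)

# Two workers, each with a private synthetic dataset of 5 features.
workers = []
for _ in range(2):
    X = rng.normal(size=(200, 5))
    y = (X @ rng.normal(size=5) + rng.normal(scale=0.1, size=200) > 0).astype(float)
    workers.append((X, y))

w = np.zeros(5)
for step in range(100):
    # The coordinator only ever sees shielded gradients, never X or y.
    shared = [shield(local_gradient(w, X, y)) for X, y in workers]
    w -= 0.5 * np.mean(shared, axis=0)

print("trained weights:", np.round(w, 3))
```

Only the perturbed updates leave a worker in this sketch, which is the property the description points at; the repository itself may realize it quite differently.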
Related projects:

Repository | Description | Stars |
---|---|---|
git-disl/lockdown | A defense against backdoor attacks in federated learning, which trains machine learning models on distributed datasets. | 14 |
dcalab-unipv/turning-privacy-preserving-mechanisms-against-federated-learning | An attack that turns privacy-preserving mechanisms against the federated learning systems that deploy them. | 8 |
eric-ai-lab/fedvln | An open-source implementation of a federated learning framework to protect data privacy in embodied agent learning for Vision-and-Language Navigation. | 13 |
mithril-security/bastionlab | Enables secure data collaboration between data owners and data scientists without exposing the original data. | 170 |
lpomfrey/django-debreach | Protects Django applications against the BREACH attack by modifying the length of HTML responses. | 75 |
ai-secure/dba | A tool for demonstrating and analyzing attacks on federated learning systems by introducing backdoors into distributed machine learning models. | 176 |
mbilalzafar/fair-classification | Provides a Python implementation of fairness mechanisms in classification models to mitigate disparate impact and mistreatment. | 189 |
tf-encrypted/moose | A secure distributed dataflow framework for encrypted machine learning and data processing. | 59 |
safe-graph/dgfraud | A toolbox for building and comparing graph neural network-based fraud detection models. | 693 |
cossacklabs/acra | Database security suite with field-level encryption, search over encrypted data, SQL injection prevention, and intrusion detection capabilities. | 1,357 |
directdefense/superserial | A Burp Suite extension for identifying Java deserialization vulnerabilities in client requests and server responses. | 9 |
jeremy313/soteria | An implementation of a defense against model inversion attacks in federated learning. | 55 |
dalmatinerdb/dproto | A protocol defining data exchange formats for the DalmatinerDB database. | 1 |
protectai/llm-guard | A security toolkit that protects interactions with large language models from threats such as prompt injection and sensitive data leakage. | 1,242 |
ybdai7/chameleon-durable-backdoor | An implementation of a federated learning attack that plants durable backdoors in the global model by adapting to peer images. | 32 |