FILM

Language model attack

An attack on federated learning of language models that recovers private client text from shared model updates

Official repo for the paper: Recovering Private Text in Federated Learning of Language Models (NeurIPS 2022)

GitHub:
57 stars
4 watching
7 forks
Language: Python
Last commit: almost 2 years ago
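
The attack described in the paper starts from a property of language models trained with federated learning: the gradient of the word-embedding matrix is nonzero only in the rows of tokens that occur in a client's batch, so a curious server can read off the client's bag of words from a single gradient update. The sketch below illustrates just that recovery step on a toy model; it is not the paper's full pipeline (which goes on to reconstruct whole sentences from the recovered word set), and the model, sizes, and tensor names are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab_size, embed_dim = 1000, 64

class TinyLM(nn.Module):
    """Toy next-token language model standing in for the shared FL model."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.head = nn.Linear(embed_dim, vocab_size)

    def forward(self, ids):
        return self.head(self.embed(ids))

model = TinyLM()

# Private client batch of token ids; the server never sees these directly.
client_ids = torch.randint(0, vocab_size, (4, 16))
inputs, targets = client_ids[:, :-1], client_ids[:, 1:]

# The client computes the gradient update it would send to the server.
loss = nn.functional.cross_entropy(
    model(inputs).reshape(-1, vocab_size), targets.reshape(-1)
)
(embed_grad,) = torch.autograd.grad(loss, model.embed.weight)

# Server-side attacker: rows of the embedding gradient that are nonzero
# correspond to token ids that were looked up in the client's inputs.
recovered = (embed_grad.abs().sum(dim=1) > 0).nonzero(as_tuple=True)[0]
print("recovered tokens match client inputs:",
      torch.equal(recovered, torch.unique(inputs)))
```

The nonzero-row trick works because cross-entropy gradients only flow into embedding rows that were actually looked up in the forward pass; turning this bag of words back into readable sentences requires the additional reconstruction machinery described in the paper.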

Related projects:

dcalab-unipv/turning-privacy-preserving-mechanisms-against-federated-learning (8 stars): Presents an attack that compromises the privacy-preserving mechanisms of federated learning systems.
jeremy313/fl-wbc (37 stars): A defense mechanism against model poisoning attacks in federated learning.
lhfowl/robbing_the_fed (23 stars): Lets an attacker obtain user data directly from federated learning gradient updates by modifying the shared model architecture.
ftramer/steal-ml (344 stars): A tool for extracting machine learning models from cloud-based services through their prediction APIs.
hfzhang31/a3fl (23 stars): A framework for attacking federated learning systems with adaptive backdoor attacks.
git-disl/lockdown (18 stars): A backdoor defense for federated learning that protects against data poisoning by isolating subspace training and aggregating models with robust consensus fusion.
jonasgeiping/breaching (274 stars): A PyTorch framework for analyzing data-reconstruction (breaching) attacks against federated learning models.
yunqing-me/attackvlm (165 stars): An adversarial attack framework targeting large vision-language models.
jeremy313/soteria (55 stars): An implementation of a defense against model inversion attacks in federated learning.
ibm/reprogrammble-fl (5 stars): Improves the utility-privacy tradeoff in federated learning by reprogramming models to balance data utility and user privacy.
ymjs-irfan/dp-fedsam (42 stars): An implementation of a differentially private federated learning algorithm designed to improve the robustness and performance of federated systems.
facebookresearch/spiritlm (845 stars): An end-to-end language model that generates coherent text from both spoken and written inputs.
ai-secure/dba (179 stars): A tool for demonstrating and analyzing attacks on federated learning systems by introducing backdoors into distributed machine learning models.
sap-samples/machine-learning-diff-private-federated-learning (365 stars): Simulates a federated learning setting while preserving individual data privacy.
umbc-sanjaylab/fedpseudo_kdd23 (0 stars): An implementation of federated survival analysis built on a deep learning framework.