A Privacy-Preserving Local Differential Privacy-Based Federated Learning Model to Secure LLM from Adversarial Attacks

Research output: Contribution to journal › Article › peer-review

7 Scopus citations

Abstract

Chatbot applications using large language models (LLMs) offer human-like responses to user queries, but their widespread use raises significant concerns about data privacy and integrity. Adversaries can extract confidential data during model training and inject poisoned data, compromising chatbot reliability. Additionally, transmitting unencrypted user data for local model training poses further privacy challenges. This paper addresses these issues by proposing a blockchain- and federated learning-enabled LLM model that ensures user data privacy and integrity. A local differential privacy method adds noise to anonymize user data during the data collection phase for local training at the edge layer. Federated learning prevents private local training data from being shared with the cloud-based global model. Secure multi-party computation using secret sharing and blockchain ensures secure and reliable model aggregation, preventing adversarial model poisoning. Evaluation results show 46% higher accuracy in global model training compared to models trained with poisoned data. The study demonstrates that the proposed local differential privacy method effectively prevents adversarial attacks and protects federated learning models from poisoning during training, enhancing the security and reliability of chatbot applications.
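The pipeline described in the abstract — clients perturb their data locally for differential privacy, then secret-share their updates so no single aggregator sees an individual contribution — can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a Laplace-mechanism form of local differential privacy and simple additive secret sharing over real numbers, with hypothetical parameter choices (epsilon, sensitivity, three aggregators):

```python
import math
import random

random.seed(0)  # for reproducibility of this sketch

def laplace_sample(scale):
    """Draw from Laplace(0, scale) via inverse-CDF sampling (stdlib only)."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def ldp_perturb(value, epsilon, sensitivity=1.0):
    """Add Laplace noise locally so the reported value satisfies epsilon-LDP
    (assuming the true value changes by at most `sensitivity`)."""
    return value + laplace_sample(sensitivity / epsilon)

def additive_shares(value, n_parties):
    """Split a value into n additive shares: any n-1 shares alone reveal
    nothing about the value; all n sum back to it exactly."""
    shares = [random.uniform(-1.0, 1.0) for _ in range(n_parties - 1)]
    shares.append(value - sum(shares))
    return shares

# Each client perturbs its local model update, then secret-shares it
# across three aggregators (values here are illustrative scalars).
client_updates = [0.2, -0.1, 0.4]
epsilon = 1.0
shared = [additive_shares(ldp_perturb(v, epsilon), 3) for v in client_updates]

# Each aggregator sums only the shares it holds; recombining the partial
# sums reconstructs the global update without exposing any single client's.
partial_sums = [sum(col) for col in zip(*shared)]
global_update = sum(partial_sums)
print(global_update)
```

The design point this illustrates is that the noisy sum, not any individual update, is all that ever becomes visible to the server — the abstract's blockchain layer would additionally record the aggregation to prevent a malicious aggregator from tampering with the partial sums.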

Original language: English
Article number: 57
Journal: Human-centric Computing and Information Sciences
Volume: 14
DOIs:
State: Published - 2024

Keywords

  • Blockchain
  • Federated Learning
  • Local Differential Privacy
  • Secret Sharing
