TY - JOUR
T1 - A Privacy-Preserving Local Differential Privacy-Based Federated Learning Model to Secure LLM from Adversarial Attacks
AU - Salim, Mikail Mohammed
AU - Deng, Xianjun
AU - Park, Jong Hyuk
N1 - Publisher Copyright:
© 2023 Korea Information Processing Society. All Rights Reserved.
PY - 2024
Y1 - 2024
N2 - Chatbot applications using large language models (LLMs) offer human-like responses to user queries, but their widespread use raises significant concerns about data privacy and integrity. Adversarial attacks can extract confidential data during model training and submit poisoned data, compromising chatbot reliability. Additionally, the transmission of unencrypted user data for local model training poses new privacy challenges. This paper addresses these issues by proposing a blockchain and federated learning-enabled LLM model to ensure user data privacy and integrity. A local differential privacy method adds noise to anonymize user data during the data collection phase for local training at the edge layer. Federated learning prevents the sharing of private local training data with the cloud-based global model. Secure multi-party computation using secret sharing and blockchain ensures secure and reliable model aggregation, preventing adversarial model poisoning. Evaluation results show a 46% higher accuracy in global model training compared to models trained with poisoned data. The study demonstrates that the proposed local differential privacy method effectively prevents adversarial attacks and protects federated learning models from poisoning during training, enhancing the security and reliability of chatbot applications.
AB - Chatbot applications using large language models (LLMs) offer human-like responses to user queries, but their widespread use raises significant concerns about data privacy and integrity. Adversarial attacks can extract confidential data during model training and submit poisoned data, compromising chatbot reliability. Additionally, the transmission of unencrypted user data for local model training poses new privacy challenges. This paper addresses these issues by proposing a blockchain and federated learning-enabled LLM model to ensure user data privacy and integrity. A local differential privacy method adds noise to anonymize user data during the data collection phase for local training at the edge layer. Federated learning prevents the sharing of private local training data with the cloud-based global model. Secure multi-party computation using secret sharing and blockchain ensures secure and reliable model aggregation, preventing adversarial model poisoning. Evaluation results show a 46% higher accuracy in global model training compared to models trained with poisoned data. The study demonstrates that the proposed local differential privacy method effectively prevents adversarial attacks and protects federated learning models from poisoning during training, enhancing the security and reliability of chatbot applications.
KW - Blockchain
KW - Federated Learning
KW - Local Differential Privacy
KW - Secret Sharing
UR - https://www.scopus.com/pages/publications/85204926586
U2 - 10.22967/HCIS.2024.14.057
DO - 10.22967/HCIS.2024.14.057
M3 - Article
AN - SCOPUS:85204926586
SN - 2192-1962
VL - 14
JO - Human-centric Computing and Information Sciences
JF - Human-centric Computing and Information Sciences
M1 - 57
ER -