A Review of Poisoning Attacks on Graph Neural Networks

Kihyun Seol, Yerin Lee, Seungyeop Song, Heejae Park, Laihyuk Park

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation

Abstract

A Graph Neural Network (GNN) is designed to generate effective node embeddings from graph-structured data, which makes GNNs well suited for tasks such as node classification and graph generation. As their use has expanded, concerns about their security, robustness, and privacy have grown. This paper explores the various adversarial attacks that can be executed against GNNs, focusing on poisoning attacks, a specific type of adversarial attack in which the training data is manipulated before the model is trained. We examine key attack strategies and review recent research developments for each attack. Finally, we conclude with proposals aimed at enhancing the robustness of GNNs.
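For context, the sketch below (not taken from the paper) illustrates the basic idea of a structure-poisoning attack on a GNN: the attacker flips a small budget of edges in the training graph before learning, so node embeddings are computed over corrupted structure. The helper names, the toy graph, and the edge-flip budget are all hypothetical.

```python
# Minimal, illustrative sketch of edge-flip poisoning on a toy graph, assuming a
# random perturbation budget; it is not the attack strategy studied in the paper.
import numpy as np

rng = np.random.default_rng(0)

def gcn_propagate(A, X):
    """One GCN-style propagation step: D^{-1/2} (A + I) D^{-1/2} X."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return A_hat @ X

def poison_edges(A, budget):
    """Flip `budget` randomly chosen node pairs (add or remove edges) before training."""
    A_poisoned = A.copy()
    n = A.shape[0]
    for _ in range(budget):
        i, j = rng.choice(n, size=2, replace=False)
        A_poisoned[i, j] = A_poisoned[j, i] = 1 - A_poisoned[i, j]
    return A_poisoned

# Toy graph: a 5-node path 0-1-2-3-4 with 2-dimensional node features.
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1
X = rng.normal(size=(5, 2))

A_poisoned = poison_edges(A, budget=2)  # training graph corrupted before learning
print("clean embeddings:\n", gcn_propagate(A, X))
print("poisoned embeddings:\n", gcn_propagate(A_poisoned, X))
```

Even this untargeted random perturbation changes the embeddings a downstream classifier would be trained on; the attacks reviewed in the paper choose perturbations adversarially rather than at random.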

Original language: English
Title of host publication: ICTC 2024 - 15th International Conference on ICT Convergence
Subtitle of host publication: AI-Empowered Digital Innovation
Publisher: IEEE Computer Society
Pages: 239-240
Number of pages: 2
ISBN (Electronic): 9798350364637
DOIs
State: Published - 2024
Event: 15th International Conference on Information and Communication Technology Convergence, ICTC 2024 - Jeju Island, Korea, Republic of
Duration: 16 Oct 2024 - 18 Oct 2024

Publication series

Name: International Conference on ICT Convergence
ISSN (Print): 2162-1233
ISSN (Electronic): 2162-1241

Conference

Conference: 15th International Conference on Information and Communication Technology Convergence, ICTC 2024
Country/Territory: Korea, Republic of
City: Jeju Island
Period: 16/10/24 - 18/10/24

Keywords

  • Adversarial attack
  • GNNs
  • poisoning attack
