Evaluating the Resilience of Graph Neural Network Architectures to Adversarial and Noisy Data in High-Stakes Construction Project Management

Research output: Contribution to journal › Article › peer-review

Abstract

High-stakes mega-construction projects present a challenging environment for decision-support models, which are exposed to risks from both deliberate attacks and unintentional errors. These vulnerabilities can degrade model performance and lead to costly decision-making mistakes. We focus on two major classes of adversarial machine learning attacks and analyze their consequences for predictive accuracy on graph-structured data comprising 267,763 activity records. The first class is data poisoning, in which the training set is deliberately corrupted, through label flipping, random label assignment, or feature manipulation, impairing the model’s capacity to learn effectively before deployment. The second class is evasion attacks, such as the fast gradient sign method (FGSM), which exploits gradient information to perturb input features and mislead the model at inference time. GatedGNN outperformed GCN, GAT, and MPNN against poisoning attacks, consistently achieving an F1 score above 75% across all datasets, even under label flipping, the most damaging method. The benchmark models (GAT, GCN, and MPNN) suffered comparable F1 losses under random labels and feature manipulation, whereas GatedGNN slightly benefited from mild feature noise owing to its gating mechanism. At test time, however, FGSM critically damaged GatedGNN, dropping its average F1 from 88% on clean data to 5–15%, while GCN, GAT, and MPNN sustained roughly 55–57%. These findings highlight that robustness is threat-specific: GatedGNN’s gating filters poisoned messages but produces smooth gradients that FGSM can exploit. Practitioners should therefore rely on GatedGNN’s superior resistance to poisoned historical records while pairing it with input sanitization or adversarial training against live-sensor spoofing; for real-time threats, additional defenses are essential.
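To make the FGSM evasion attack described above concrete, the sketch below applies the gradient-sign update to a simple logistic-regression surrogate (not the paper's GNN architectures, whose weights and data are not available here); the model, weights, and perturbation budget are illustrative assumptions. FGSM shifts each input feature by `eps` in the direction of the sign of the loss gradient with respect to the input:

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """FGSM on a logistic-regression surrogate (illustrative stand-in
    for the paper's GNNs).

    x   : 1-D feature vector
    y   : true label in {0, 1}
    w, b: model weight vector and bias (hypothetical, fixed here)
    eps : perturbation budget

    Returns x + eps * sign(dL/dx) for the cross-entropy loss L,
    i.e. the single-step worst-case perturbation within an
    L-infinity ball of radius eps.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # predicted P(y=1)
    grad_x = (p - y) * w                    # dL/dx of cross-entropy
    return x + eps * np.sign(grad_x)

# Hypothetical example: a correctly classified point gets flipped.
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])          # clean score x@w + b = 0.3 > 0 -> class 1
x_adv = fgsm_perturb(x, 1, w, b, eps=0.3)
print(x_adv @ w + b)              # adversarial score drops below 0 -> class 0
```

Because the update only needs the sign of the input gradient, a smooth, well-behaved loss surface (as the abstract notes for GatedGNN's gating) gives the attacker a reliable direction, which is one intuition for why FGSM hits that model hardest at inference time.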

Original language: English
Article number: 04026029
Journal: Journal of Construction Engineering and Management - ASCE
Volume: 152
Issue number: 4
DOIs
Publication status: Published - 1 Apr 2026

Bibliographical note

Publisher Copyright:
© 2026 American Society of Civil Engineers.

Keywords

  • Adversarial attacks
  • Construction project management
  • Data noise
  • Data poisoning
  • Decision support systems
  • Graph neural networks (GNN)
  • Machine learning (ML) robustness
