Nuclear Technology / Volume 206 / Number 1 / January 2020 / Pages 82-93
Technical Paper / dx.doi.org/10.1080/00295450.2019.1626170
Model-based defenses have been promoted over the past decade as essential protection against intrusion and data deception attacks on the control networks used to digitally regulate the operation of critical industrial systems such as nuclear reactors. The idea is that physics-based models can differentiate between genuine network engineering data, i.e., data unaltered by adversaries, and malicious data, e.g., falsified flow rates or temperatures. Machine learning techniques have also been proposed to further improve the discriminating power of model-based defenses by constantly monitoring the engineering data for deviations that are inconsistent with the physics. While this is a sound premise, critical systems such as nuclear reactors, chemical plants, and oil and gas plants share a common disadvantage: almost any information about them can be obtained by determined adversaries, such as state-sponsored attackers. One must therefore question whether model-based defenses would remain resilient under these extreme adversarial conditions. This paper represents a first step toward answering this question. Specifically, we introduce self-learning techniques, both purely data driven, e.g., deep neural networks, and physics based, that are able to predict the dynamic behavior of a nuclear reactor model. The results indicate that technically capable attackers can learn very accurate models of reactor behavior, which raises concerns about the effectiveness of model-based defenses.
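The attack premise can be illustrated with a minimal sketch: an adversary who observes plant sensor traffic fits a one-step surrogate model to the data, then uses it to forge physically plausible readings. Everything below is a hypothetical illustration, not the paper's reactor model: the lumped first-order thermal dynamics, the parameter values, and the choice of an ordinary-least-squares fit are all assumptions made for the example.

```python
# Hypothetical sketch: an eavesdropping attacker learns a one-step surrogate
# of plant dynamics from observed (temperature, power) network data.
# The plant model and all constants are illustrative assumptions.

import random

DT = 1.0        # sampling interval, s (assumed)
K_HEAT = 0.05   # heating gain (assumed)
K_COOL = 0.02   # heat-removal coefficient (assumed)
T_ENV = 20.0    # coolant/ambient temperature, deg C (assumed)

def plant_step(T, P):
    """True (hidden) dynamics: lumped first-order thermal balance."""
    return T + DT * (K_HEAT * P - K_COOL * (T - T_ENV))

# 1. Passively collect (T_k, P_k) -> T_{k+1} samples from network traffic.
random.seed(0)
T = 30.0
samples = []
for _ in range(200):
    P = random.uniform(0.0, 100.0)  # varying power demand
    T_next = plant_step(T, P)
    samples.append((T, P, T_next))
    T = T_next

# 2. Fit the surrogate T_{k+1} ~ w0 + w1*T_k + w2*P_k by ordinary least
#    squares (normal equations, solved by Gauss-Jordan elimination).
def solve3(A, b):
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

A = [[0.0] * 3 for _ in range(3)]
b = [0.0] * 3
for Tk, Pk, Tn in samples:
    x = [1.0, Tk, Pk]
    for i in range(3):
        b[i] += x[i] * Tn
        for j in range(3):
            A[i][j] += x[i] * x[j]

w0, w1, w2 = solve3(A, b)

# 3. The fit recovers the true coefficients (1 - DT*K_COOL) and DT*K_HEAT,
#    so the surrogate can generate sensor values that pass a physics check.
print(w0, w1, w2)
```

Because the illustrative plant is linear and noise free, a few hundred samples suffice to recover its dynamics essentially exactly; the same idea scales to the nonlinear, noisy case by replacing the least-squares fit with a deep neural network, as the paper's data-driven approach does.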