Fusion Science and Technology / Volume 74 / Number 1-2 / July-August 2018 / Pages 13-22
Technical Paper / dx.doi.org/10.1080/15361055.2017.1390390
Models that apply machine learning (ML) techniques to disruption prediction have improved detection rates and warning times at JET and other tokamaks. However, developing these models requires an already existing database of discharges, which poses a significant problem for training ML-based systems for ITER. To tackle this problem, this work develops a genetic-algorithm-optimized predictor, inspired by a previous study, trained initially on ASDEX-Upgrade (AUG) data only and tested against the extensive JET database. This smaller-to-larger-tokamak approach pursues the future extrapolation of the technique to ITER. Direct application of this cross-predictor yielded 30.03% false alarms and more than 42% premature alarms, indicating that different input parameters, or at least some information about the target device, are needed to achieve reasonable performance.
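As an illustration of the optimization step, the sketch below runs a minimal genetic-style search (truncation selection plus Gaussian mutation; the paper's actual genome encoding, operators, and fitness function are not given in the abstract) over two hypothetical predictor settings, a signal gain and an alarm threshold, scored on synthetic AUG-like shots by detection rate minus false-alarm rate.

```python
import random

random.seed(0)

# Synthetic AUG-like training set: (is_disruptive, peak_signal) pairs.
# Purely illustrative; the real predictor uses stored diagnostic signals.
shots = [(True, random.uniform(0.6, 1.0)) for _ in range(50)] + \
        [(False, random.uniform(0.0, 0.7)) for _ in range(200)]
n_disr = sum(1 for d, _ in shots if d)
n_safe = len(shots) - n_disr

def fitness(genome):
    """Detection rate minus false-alarm rate (assumed fitness, not the paper's)."""
    gain, threshold = genome
    detected = false_alarms = 0
    for is_disruptive, peak in shots:
        if peak * gain > threshold:            # the candidate predictor fires
            if is_disruptive:
                detected += 1
            else:
                false_alarms += 1
    return detected / n_disr - false_alarms / n_safe

def mutate(genome):
    """Gaussian perturbation of each gene, clipped to stay nonnegative."""
    return tuple(max(0.0, g + random.gauss(0.0, 0.05)) for g in genome)

population = [(random.random(), random.random()) for _ in range(30)]
for generation in range(40):
    population.sort(key=fitness, reverse=True)
    elite = population[: len(population) // 2]  # truncation selection
    population = elite + [mutate(random.choice(elite)) for _ in elite]

best = max(population, key=fitness)
print(f"best (gain, threshold) = {best}, fitness = {fitness(best):.3f}")
```

In the actual predictor the genome would plausibly encode signal choices, time windows, and model hyperparameters, with fitness evaluated against the stored shot database; the toy above only shows the shape of the search loop.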
In a second approach, a new model was created with the AUG database plus one disruptive and one nondisruptive JET pulse. The final cross-predictions (over the chronologically first 564 shots after training, 52 of them disruptive) reached 100% detected disruptions, all with warning times of at least 10 ms, and 7.42% false alarms. Performance decayed as newer shots were tested. This aging effect is a known phenomenon and can be tackled by periodically retraining the system. As a proof of principle, a final predictor was created following an adaptive approach, obtaining over the following 1000 pulses (52 of them disruptive) 91.75% detections with warning times of at least 10 ms and less than 1% false alarms.
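The adaptive scheme can be pictured as a rolling loop that evaluates each incoming shot with the current model and periodically retrains on a window of recent shots. The sketch below is a toy version under stated assumptions: a single drifting scalar signal stands in for the diagnostics, a bare threshold stands in for the predictor, and the retrain cadence and window size are invented; none of these specifics come from the paper.

```python
import random
from collections import deque

random.seed(1)

def stream_of_shots(n=1000):
    """Synthetic shot stream whose statistics drift over time,
    mimicking the aging effect described in the abstract."""
    for i in range(n):
        drift = 0.2 * i / n                    # slow drift of the signal
        if random.random() < 0.05:             # roughly 5% disruptive shots
            yield True, random.uniform(0.6, 1.0) + drift
        else:
            yield False, random.uniform(0.0, 0.5) + drift

def train(window):
    """Hypothetical training step: place the alarm threshold midway between
    the safe and disruptive populations seen in the current window."""
    disr = [s for d, s in window if d]
    safe = [s for d, s in window if not d]
    if not disr or not safe:
        return 0.55
    return (max(safe) + min(disr)) / 2.0

window = deque(maxlen=300)         # rolling training window (assumed size)
threshold, RETRAIN_EVERY = 0.55, 50
detected = missed = false_alarms = 0

for i, (is_disruptive, signal) in enumerate(stream_of_shots()):
    alarm = signal > threshold     # evaluate the current predictor first
    if is_disruptive:
        detected += alarm
        missed += not alarm
    else:
        false_alarms += alarm
    window.append((is_disruptive, signal))
    if (i + 1) % RETRAIN_EVERY == 0:
        threshold = train(window)  # periodic retraining counters aging

print(f"detected={detected} missed={missed} false_alarms={false_alarms}")
```

The design point the loop illustrates is that each shot is scored before it enters the training window, so reported detection and false-alarm rates remain honest out-of-sample figures even as the model is refreshed.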