
Improving Robustness of Neural Networks against Bit Flipping Errors during Inference

Minghai Qin, Chao Sun, and Dejan Vucinic
Western Digital Corporation, San Jose, California, USA

Abstract—We study the trade-offs between prediction accuracy and storage redundancy for neural networks stored in noisy storage media. The parameters of a trained neural network are commonly stored as binary data, and it is usually assumed that storage and retrieval are error-free. This assumption rests on the common use of Error Correcting Codes (ECCs), which correct bit flips in storage media. However, ECCs incur capacity and power overhead (10% to 20%), increasing cost and reducing the effective bandwidth when trained parameters are retrieved from storage during inference. We measured the robustness of several deep neural network architectures and datasets when bit flipping errors are present and ECCs are not used during inference. We observe that more sophisticated architectures and datasets are generally more vulnerable to bit flipping errors. We propose a simple parameter error detection method, called weight nulling, that universally improves robustness by a factor ranging from two to several orders of magnitude, depending on the network architecture.
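The abstract does not spell out the weight-nulling mechanism, so the following is only a minimal sketch of the general idea as stated: when a retrieved parameter is detected as corrupted, it is set to zero so that it contributes nothing to inference rather than a wildly wrong value. The detection rule used here (a plausibility check on the weight's magnitude), the threshold, and all function names are hypothetical illustrations, not taken from the paper.

import numpy as np

def flip_random_bits(weights: np.ndarray, bit_error_rate: float, rng: np.random.Generator) -> np.ndarray:
    # Simulate storage bit flips on the IEEE-754 float32 encoding of the weights.
    raw = weights.astype(np.float32).view(np.uint32).copy()
    n_flips = rng.binomial(raw.size * 32, bit_error_rate)
    for _ in range(n_flips):
        idx = int(rng.integers(raw.size))   # which stored 32-bit word
        bit = int(rng.integers(32))         # which bit within that word
        raw[idx] ^= np.uint32(1 << bit)
    return raw.view(np.float32)

def null_implausible_weights(weights: np.ndarray, max_abs: float = 10.0) -> np.ndarray:
    # "Weight nulling" as assumed here: zero every weight that is non-finite or
    # implausibly large for a trained network, treating it as a detected error.
    cleaned = weights.copy()
    bad = ~np.isfinite(cleaned) | (np.abs(cleaned) > max_abs)
    cleaned[bad] = 0.0
    return cleaned

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, 1000).astype(np.float32)          # stand-in for trained weights
w_noisy = flip_random_bits(w, bit_error_rate=1e-4, rng=rng)
w_clean = null_implausible_weights(w_noisy)
print("worst |w| after bit flips:", float(np.abs(w_noisy).max()))
print("worst |w| after nulling  :", float(np.abs(w_clean).max()))

The intuition behind such a scheme is that a bit flip in the exponent of a float32 weight can change its value by many orders of magnitude, whereas a nulled weight perturbs the network only slightly; the paper reports the resulting robustness gains across architectures.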

Index Terms—neural networks, bit flips, error detection

Cite: Minghai Qin, Chao Sun, and Dejan Vucinic, "Improving Robustness of Neural Networks against Bit Flipping Errors during Inference," Journal of Image and Graphics, Vol. 6, No. 2, pp. 181-186, December 2018. doi: 10.18178/joig.6.2.181-186