Fault Tolerant Triplet Networks for Training and Inference
  • Ziheng Wang ,
  • Farzad Niknia ,
  • Shanshan Liu ,
  • Pedro Reviriego ,
  • Ahmed Louri ,
  • Fabrizio Lombardi
Shanshan Liu, New Mexico State University
Corresponding Author: [email protected]


This paper deals with the fault tolerance of Triplet Networks (TNs). Results based on extensive analysis and simulation by fault injection are presented for new schemes. In accordance with the technical literature, stuck-at faults are considered in the fault model for the training process. Simulation by fault injection shows that TNs are not sensitive to this type of fault in the general case; however, an unexpected failure (leading to network convergence to false solutions) can occur when the faults are in the negative subnetwork. Analysis of this specific case is provided, and remedial solutions are proposed, namely a loss function with regularized anchor outputs for stuck-at 0 faults and a modified margin for stuck-at 1/-1 faults. Simulation shows that false solutions can be avoided very efficiently by utilizing the proposed techniques. Random bit-flip faults are then considered in the fault model for the inference process. This paper analyzes the error caused by bit-flips at different bit positions in a TN with Floating-Point (FP) format and compares it with a fault-tolerant Stochastic Computing (SC) implementation. Analysis and simulation of the TNs confirm that the main degradation is caused by bit-flips on the exponent bits. Therefore, protection schemes are proposed to handle those errors; they replace the least significant bits of the FP numbers with parity bits, covering both single- and multi-bit errors. The proposed methods outperform other low-cost fault-tolerant schemes in the technical literature, reducing the classification accuracy loss of TNs by 96.76% (97.74%) for single-bit (multi-bit) errors.
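To illustrate the general idea behind the protection scheme described above, the following is a minimal sketch of parity-in-LSB protection for IEEE-754 binary32 values: the parity of the sign and exponent bits (the positions whose bit-flips cause the main degradation) is stored in the least significant mantissa bit, so a single bit-flip in the exponent can later be detected. All function names here are hypothetical; the paper's actual scheme (including its multi-bit variant, which would use several parity bits) may differ in detail.

```python
import struct

def f32_bits(x: float) -> int:
    """Reinterpret a float as its 32-bit IEEE-754 pattern."""
    return struct.unpack(">I", struct.pack(">f", x))[0]

def bits_f32(b: int) -> float:
    """Reinterpret a 32-bit pattern as a float."""
    return struct.unpack(">f", struct.pack(">I", b))[0]

def protect(x: float) -> float:
    """Overwrite the mantissa LSB with the parity of the sign+exponent bits.

    Sacrificing the LSB costs a tiny, usually negligible precision loss,
    but adds no storage overhead.
    """
    b = f32_bits(x)
    parity = bin(b >> 23).count("1") & 1  # parity over sign + 8 exponent bits
    return bits_f32((b & ~1) | parity)

def check(x: float) -> bool:
    """True if the stored parity still matches (no exponent flip detected)."""
    b = f32_bits(x)
    parity = bin(b >> 23).count("1") & 1
    return (b & 1) == parity

# Example: a single bit-flip injected into the exponent is detected.
w = protect(1.5)
ok_before = check(w)                                # True
corrupted = bits_f32(f32_bits(w) ^ (1 << 26))       # flip one exponent bit
ok_after = check(corrupted)                         # False
```

On detection, a weight could for instance be replaced by zero or a cached copy; the choice of recovery action is orthogonal to the detection sketch shown here.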