Synthetic Neural Network: Weight Divergence Optimizer

Attempts to explain neural network behavior usually fall into the trap of
explaining the network optimizer and become lost in the complexity of the
algorithm's mathematical operation loop without ever reaching a conclusion
about how the network really works. The Weight Divergence Optimizer, a
subcomponent of the Synthetic Neural Network, explains the inner workings
of a neural network as a purely logical operation, using only the dot
product to illustrate this behavior, and proposes a formula to calculate
the biases and weights. Instead of trial and error, the new method applies
network divergence theory: it calculates the maximum variation between a
weight and the interior patterns of its own class, compares it with the
minimum variation between the same weight and the exterior patterns of the
other classes, and uses the difference between the two values in the formula.
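The abstract only outlines this calculation; a minimal sketch of the idea in Python, in which the function name, the shapes, and the use of the dot product as the variation measure are all assumptions rather than the paper's exact definitions, might look like:

```python
import numpy as np

def weight_divergence(weight, interior, exterior):
    """Hypothetical sketch of the divergence described above.

    weight   : weight vector, assumed shape (d,)
    interior : patterns of the weight's own class, assumed shape (n_in, d)
    exterior : patterns of all other classes, assumed shape (n_out, d)

    Variation is illustrated here as the dot product between the weight
    and each pattern; the paper's exact measure may differ.
    """
    # Maximum variation between the weight and its interior-class patterns
    max_interior = np.max(interior @ weight)
    # Minimum variation between the same weight and exterior patterns
    min_exterior = np.min(exterior @ weight)
    # The difference between the two values feeds the weight/bias formula
    return max_interior - min_exterior

# Toy example with assumed data
w = np.array([1.0, 0.0])
inside = np.array([[0.9, 0.1], [0.8, 0.2]])
outside = np.array([[0.1, 0.9], [0.2, 0.8]])
print(weight_divergence(w, inside, outside))
```

A large divergence in this sketch would indicate a weight that separates its own class from the others well; the sign and scaling conventions used in the actual formula are not specified in the abstract.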
The Synthetic Neural Network addresses several challenges, starting from
how a neural network works and produces its result, and extending to
reducing the size of the training dataset, high memory consumption, heavy
processor load, and credibility. The network output shows promising
results in the testing and validation stages: it achieves accuracies from
90% down to 75% for two to nine classes on the USPS dataset, and from 95%
down to 71% on RMNIST expanded with the same classes. On audio patterns
with MFCC feature extraction it reaches an accuracy of 73% with 10
classes. All training operations complete in 0.5 to 15.5 seconds. The
proposed method uses the lowest number of neurons yet used in a two-level
neural network, which helps reduce the network size, discards redundant
patterns, removes corrupted inputs, and makes the network converge in a
few seconds with the smallest input dataset used.