One-dimensional DCNN Feature Selective Transformation with LSTM-RDN for Image Classification
  • Li Chaorong,
  • Yuanyuan Huang,
  • Wei Huang,
  • Fengqing Qin
Abstract

Feature selection and transformation are important techniques in the machine learning field. A good feature selection/transformation method can greatly improve the performance of a classification algorithm. In this work, we propose a simple but efficient image classification method based on a two-stage processing strategy. In the first stage, one-dimensional features are extracted from an image by transfer learning with a pre-trained Deep Convolutional Neural Network (DCNN). These one-dimensional DCNN features still suffer from information redundancy and weak discriminative ability, so feature transformation is needed to obtain more discriminative features. We propose a feature learning and selective transformation network based on Long Short-Term Memory (LSTM) combined with ReLU and Dropout layers (called LSTM-RDN) to further process the one-dimensional DCNN features. Verification experiments were conducted on three public object image datasets (Cifar10, Cifar100 and Fashion-MNIST), three fine-grained image datasets (CUB200-2011, Stanford-Cars, FGVC-Aircraft) and a COVID-19 dataset. In the experiments, we used several backbone network models, including AlexNet, VGG16, ResNet18, ResNet101, InceptionV2 and EfficientNet-b0. Experimental results show that, through feature selective transformation, the recognition accuracy of these DCNN models can significantly exceed the classification accuracy of state-of-the-art methods.
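To make the second-stage transformation concrete, the following is a minimal numpy sketch of a head in the spirit of LSTM-RDN: the 1-D DCNN feature vector is reshaped into a short sequence, passed through an LSTM cell, then through ReLU and (train-time) Dropout, and finally through a linear classifier. The chunking scheme, layer sizes, and class names (`LSTMRDNHead`, `chunk`, `hidden`) are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMRDNHead:
    """Hypothetical sketch of an LSTM-RDN-style transformation head:
    an LSTM pass over chunks of the 1-D DCNN feature vector, followed
    by ReLU, Dropout (training only), and a linear classifier."""

    def __init__(self, chunk, hidden, n_classes, drop=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.chunk, self.hidden, self.drop = chunk, hidden, drop
        # One stacked weight matrix for the four LSTM gates (i, f, g, o).
        self.W = rng.normal(0.0, 0.1, (4 * hidden, chunk + hidden))
        self.b = np.zeros(4 * hidden)
        # Linear classifier on the final hidden state.
        self.Wc = rng.normal(0.0, 0.1, (n_classes, hidden))
        self.bc = np.zeros(n_classes)

    def forward(self, feat, train=False, rng=None):
        # Reshape the 1-D DCNN feature into a (steps, chunk) sequence.
        steps = feat.size // self.chunk
        seq = feat[: steps * self.chunk].reshape(steps, self.chunk)
        h = np.zeros(self.hidden)
        c = np.zeros(self.hidden)
        for x in seq:  # standard LSTM cell recurrence
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, g, o = np.split(z, 4)
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
        h = np.maximum(h, 0.0)            # ReLU layer
        if train:                         # inverted Dropout at train time
            mask = rng.random(h.shape) >= self.drop
            h = h * mask / (1.0 - self.drop)
        return self.Wc @ h + self.bc      # class logits

# Transform a 512-dim DCNN feature into 10 class logits.
head = LSTMRDNHead(chunk=64, hidden=128, n_classes=10)
feat = np.random.default_rng(1).normal(size=512)
logits = head.forward(feat)
print(logits.shape)  # (10,)
```

In this sketch the LSTM acts as the learnable selective transformation, while ReLU and Dropout regularize the transformed feature before classification; in practice the head would be trained jointly with a cross-entropy loss on the frozen backbone's features.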