lstmrdn.pdf (2.13 MB)

One-dimensional DCNN Feature Selective Transformation with LSTM-RDN for Image Classification

Posted on 02.06.2021, 21:00 by Chaorong Li, Yuanyuan Huang, Wei Huang, Fengqing Qin
Feature selection and transformation are important techniques in machine learning: a good feature selection or transformation can greatly improve the performance of a classification method. In this work, we propose a simple but efficient image classification method based on a two-stage processing strategy. In the first stage, one-dimensional features are obtained from an image by transfer learning with a pre-trained Deep Convolutional Neural Network (DCNN). These one-dimensional DCNN features still suffer from information redundancy and weak discriminative ability, so further feature transformation is needed to obtain more discriminative features. In the second stage, we propose a feature learning and selective transformation network based on Long Short-Term Memory (LSTM) combined with ReLU and Dropout layers (called LSTM-RDN) to further process the one-dimensional DCNN features.

Verification experiments were conducted on three public object image datasets (CIFAR-10, CIFAR-100 and Fashion-MNIST), three fine-grained image datasets (CUB200-2011, Stanford Cars and FGVC-Aircraft) and a COVID-19 dataset, with several backbone network models: AlexNet, VGG16, ResNet18, ResNet101, InceptionV2 and EfficientNet-B0. Experimental results show that the recognition performance of the proposed method significantly exceeds that of existing state-of-the-art methods.

Machine vision classification has reached a performance bottleneck, and it is difficult to break through it with ever larger network models whose huge numbers of parameters must be optimized. We present an effective alternative for the visual classification task: feature extraction with a backbone DCNN and feature selective transformation with LSTM-RDN, performed separately. The code and pre-trained models are available from:
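The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' released implementation: the class name `LSTMRDN`, the hidden size, the dropout rate, and the use of a 512-dimensional backbone output (as from a ResNet18 with its classifier removed) are all illustrative choices.

```python
import torch
import torch.nn as nn

class LSTMRDN(nn.Module):
    """Sketch of stage 2: an LSTM combined with ReLU and Dropout layers
    that transforms one-dimensional DCNN features before classification."""

    def __init__(self, feat_dim: int, hidden_dim: int, num_classes: int,
                 dropout: float = 0.5):
        super().__init__()
        self.lstm = nn.LSTM(input_size=feat_dim, hidden_size=hidden_dim,
                            batch_first=True)
        self.relu = nn.ReLU()
        self.drop = nn.Dropout(dropout)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, feat_dim) one-dimensional features from a pre-trained
        # backbone; treated here as a length-1 sequence for the LSTM.
        out, _ = self.lstm(x.unsqueeze(1))
        h = self.drop(self.relu(out[:, -1, :]))
        return self.fc(h)

# Stage 1 (assumed): a frozen pre-trained backbone yields fixed-length
# feature vectors; random tensors stand in for them here.
features = torch.randn(4, 512)
model = LSTMRDN(feat_dim=512, hidden_dim=256, num_classes=10)
logits = model(features)  # shape: (4, 10)
```

In an actual experiment the random `features` would be replaced by the penultimate-layer outputs of one of the backbones listed above (e.g. ResNet18), extracted once and then fed to the LSTM-RDN for training and classification.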


Submitting Author's Institution

Yibin University
