Abstract—Many super-resolution methods have recently been proposed in the literature, among which convolutional neural networks have been shown to achieve good results. C. Dong et al. proposed a convolutional neural network structure (SRCNN) to solve the super-resolution problem effectively. J. Kim et al. proposed a much deeper convolutional neural network (VDSR) that improves on C. Dong et al.'s method. However, unlike VDSR, which is trained on residue images, SRCNN is trained directly on high-resolution images. We therefore surmise that the improvement of VDSR is due not only to the depth of the network structure but also to the training on residue images. This paper studies and compares the performance of training on high-resolution images and training on residue images with the two neural network structures, SRCNN and VDSR. Some deep CNNs apply zero padding, which pads the input to each convolutional layer with zeros around the border so that the feature maps keep the same size. SRCNN does not perform padding, so the resulting high-resolution images are smaller than expected. This study also proposes two revised versions of SRCNN that keep the output the same size as the input image.
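The size effect of zero padding described in the abstract can be illustrated with a minimal sketch. Assuming a single-channel patch and one 3x3 filter (the filter values and patch size here are illustrative, not the paper's trained weights), a "valid" convolution without padding loses a 1-pixel border per layer, while padding the input with one row/column of zeros keeps the feature map the same size:

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive 2-D 'valid' convolution (no padding): the output shrinks
    by (kernel_size - 1) pixels in each dimension."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

x = np.random.rand(33, 33)   # toy single-channel "image patch"
k = np.ones((3, 3)) / 9.0    # toy 3x3 averaging filter

# Without padding, each 3x3 convolution loses a 1-pixel border:
y = conv2d_valid(x, k)
print(y.shape)               # (31, 31) — smaller than the input

# Zero padding the input by 1 keeps the output the same size ("same" conv):
xp = np.pad(x, 1, mode="constant")
ys = conv2d_valid(xp, k)
print(ys.shape)              # (33, 33) — unchanged
```

Stacking many unpadded layers (as in a deep network like VDSR, which uses 3x3 filters) would shrink the output by 2 pixels per layer, which is why deep architectures commonly pad each layer; SRCNN's lack of padding is what the two revised versions address.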
Index Terms—super resolution, convolutional networks, bicubic interpolation, deep learning, underdetermined inverse problem
Cite: Hwei Jen Lin, Yoshimasa Tokuyama, and Zi Jun Lin, "Residual Learning Based Convolutional Neural Network for Super Resolution," Journal of Image and Graphics, Vol. 7, No. 4, pp. 126-129, December 2019. doi: 10.18178/joig.7.4.126-129