Accurate Patch Based Satellite Image Fusion Using Deep Convolution Neural Network

  • Unique Paper ID: 160757
  • Volume: 10
  • Issue: 1
  • PageNo: 1131-1139
  • Abstract:
  • Spatiotemporal fusion combines Landsat and MODIS images, which have complementary spatial and temporal characteristics, to produce high-resolution data. This work presents a spatial fusion method based on a deep convolutional neural network (DCNN) for handling large volumes of remote sensing data in practical applications. Landsat image regions are derived from low-resolution MODIS images. A large set of training patches is first collected with respect to attributes such as color, edges, and pixel statistics. The images are then partitioned into 3x3 blocks and converted into arrays for the DCNN training process. A regression-based DCNN training algorithm predicts the missing information from the local patches. The trained DCNN model then generates the predicted output using the same procedure as in training. In testing, the patches are finally reassembled into a matrix to obtain the complete output image. Two standard Landsat–MODIS files are evaluated extensively, with images assessed using the RMSE, MAE, RMAE, and MACE metrics. The proposed strategy yields more accurate fusion results than sparse-representation-based methods, achieving average RMSE, MAE, RMAE, and MACE values of 0.004, 0.09, 0.02, and 4.65, respectively, over multiple images from the dataset.
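The patch pipeline described in the abstract — split each band into non-overlapping 3x3 blocks, flatten the blocks into arrays for the regression network, then reassemble predicted patches into a full image matrix — can be sketched as below. This is a minimal illustration of the patch/array round trip only, not the authors' implementation; the function names and the synthetic 6x6 band are assumptions for demonstration.

```python
import numpy as np

def image_to_patches(img, k=3):
    """Split a 2-D band into non-overlapping k x k patches, flattened to rows."""
    h, w = img.shape
    h_t, w_t = h - h % k, w - w % k  # trim so both dimensions divide evenly by k
    img = img[:h_t, :w_t]
    patches = (img.reshape(h_t // k, k, w_t // k, k)
                  .swapaxes(1, 2)           # group the k x k blocks together
                  .reshape(-1, k * k))      # one row per patch, ready for regression
    return patches, (h_t, w_t)

def patches_to_image(patches, shape, k=3):
    """Reassemble flattened k x k patches back into the image matrix."""
    h, w = shape
    return (patches.reshape(h // k, w // k, k, k)
                   .swapaxes(1, 2)
                   .reshape(h, w))

# Round-trip check on a synthetic band (stands in for a Landsat/MODIS band).
band = np.arange(36, dtype=float).reshape(6, 6)
patches, shp = image_to_patches(band)       # 4 patches of 9 values each
recon = patches_to_image(patches, shp)
```

In the paper's setting, the flattened patch rows would be fed to the regression DCNN, and `patches_to_image` corresponds to the final testing step where predicted patches are converted back into the complete image.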

Cite This Article

  • ISSN: 2349-6002
