Accurate Patch Based Satellite Image Fusion Using Deep Convolution Neural Network
Author(s):
P. Veena, C. C. Manju
Keywords:
Spatiotemporal, remote sensing, MODIS images, deep convolutional neural network
Abstract
Spatiotemporal fusion combines Landsat and MODIS images, which have complementary spatial and temporal characteristics, to produce data with both high spatial and high temporal resolution. This work presents a deep convolutional neural network (DCNN)-based spatiotemporal fusion method for handling large volumes of remote sensing data in practical applications. Landsat image regions are predicted from low-resolution MODIS images. A large set of training patches is first collected with respect to attributes such as colour, edge, and pixel statistics. The images are then partitioned into 3x3 blocks and converted to arrays for the DCNN training process. A regression-based DCNN training algorithm is used to predict the missing information from the local patches. The trained DCNN model then generates the predicted output using the same procedure as in training; in the testing stage, the predicted patches are finally reassembled into a matrix to obtain the complete fused image. Two standard Landsat-MODIS datasets are evaluated extensively, and the fused images are assessed using RMSE, MAE, RMAE, and MACE metrics. The proposed strategy yields more accurate fusion results than sparse representation-based methods, achieving average RMSE, MAE, RMAE, and MACE values of 0.004, 0.09, 0.02, and 4.65, respectively, across multiple images from the datasets.
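The abstract outlines a patch-based workflow: split the images into 3x3 blocks, convert them to arrays, train a regression DCNN to map coarse MODIS patches to fine Landsat patches, and reassemble the predicted patches into the fused image. The sketch below illustrates that pipeline under stated assumptions; it is not the authors' implementation, and names such as extract_patches, reassemble, PatchRegressor, the layer sizes, and the training hyperparameters are illustrative choices.

import numpy as np
import torch
import torch.nn as nn

def extract_patches(image, size=3):
    """Split a 2-D image into non-overlapping size x size patches (array form)."""
    h, w = image.shape
    h, w = h - h % size, w - w % size          # trim to a multiple of the patch size
    patches = (image[:h, :w]
               .reshape(h // size, size, w // size, size)
               .swapaxes(1, 2)
               .reshape(-1, 1, size, size))
    return patches.astype(np.float32)

def reassemble(patches, shape, size=3):
    """Inverse of extract_patches: tile predicted patches back into an image."""
    h, w = shape[0] - shape[0] % size, shape[1] - shape[1] % size
    return (patches.reshape(h // size, w // size, size, size)
                   .swapaxes(1, 2)
                   .reshape(h, w))

class PatchRegressor(nn.Module):
    """Small convolutional regressor mapping coarse patches to fine patches (assumed architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),        # regression output, same patch size
        )

    def forward(self, x):
        return self.net(x)

# Placeholder data standing in for co-registered MODIS (coarse) and Landsat (fine) images.
modis = np.random.rand(240, 240)
landsat = np.random.rand(240, 240)

x = torch.from_numpy(extract_patches(modis))
y = torch.from_numpy(extract_patches(landsat))

model = PatchRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Regression training: minimise the patch-wise reconstruction error.
for epoch in range(10):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# Testing: predict fine-resolution patches and tile them back into a full image,
# mirroring the patch-to-matrix reassembly step described in the abstract.
with torch.no_grad():
    pred = model(x).numpy()
fused = reassemble(pred, modis.shape)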
Article Details
Unique Paper ID: 160757
Publication Volume & Issue: Volume 10, Issue 1
Page(s): 1131 - 1139