Accurate Patch Based Satellite Image Fusion Using Deep Convolution Neural Network

  • Unique Paper ID: 160757
  • Volume: 10
  • Issue: 1
  • PageNo: 1131-1139
  • Abstract:
  • Spatiotemporal fusion combines Landsat and MODIS images, which have complementary spatial and temporal characteristics, to create high-resolution data. A deep convolutional neural network (DCNN)-based spatial fusion method is presented in this work to handle large volumes of remote sensing data in practical applications. Landsat image regions are derived from low-resolution MODIS images. A large number of training patches is first collected with respect to attributes such as colour, edges, and pixel statistics. The images are then divided into 3×3 patches and converted to arrays for the DCNN training process. A regression-based DCNN training algorithm is used to predict the missing information from the local patches. The trained DCNN model is then used to generate the predicted output following the same procedure as in training. During testing, the patches are finally reassembled into a matrix to obtain the complete output image. Two standard Landsat–MODIS datasets are extensively evaluated, and the images are assessed using the RMSE, MAE, RMAE, and MACE metrics. The proposed strategy yields more accurate fusion results than sparse representation-based methods. Across multiple images from the datasets, average RMSE, MAE, RMAE, and MACE values of 0.004, 0.09, 0.02, and 4.65, respectively, are achieved.
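
The following is a minimal Python/PyTorch sketch of the patch-based regression workflow described in the abstract: split the images into 3×3 blocks, train a regression network on MODIS/Landsat patch pairs, then reassemble the predicted patches into the full output matrix. It assumes single-band, co-registered inputs with the MODIS image already resampled to the Landsat grid; the layer configuration, hyperparameters, and helper names (to_patches, from_patches, train, predict) are illustrative assumptions rather than the authors' exact implementation.

# Illustrative sketch only; architecture and settings are assumptions.
import numpy as np
import torch
import torch.nn as nn

PATCH = 3  # 3x3 block size, as described in the abstract

def to_patches(img):
    # Split a 2-D image into non-overlapping 3x3 patches -> (N, 1, 3, 3).
    h, w = img.shape
    h, w = h - h % PATCH, w - w % PATCH          # crop to a multiple of 3
    img = img[:h, :w]
    patches = (img.reshape(h // PATCH, PATCH, w // PATCH, PATCH)
                  .transpose(0, 2, 1, 3)
                  .reshape(-1, 1, PATCH, PATCH))
    return patches.astype(np.float32), (h, w)

def from_patches(patches, shape):
    # Reassemble (N, 1, 3, 3) patches back into the full image matrix.
    h, w = shape
    return (patches.reshape(h // PATCH, w // PATCH, PATCH, PATCH)
                   .transpose(0, 2, 1, 3)
                   .reshape(h, w))

# Small regression CNN: MODIS patch in, Landsat patch out (MSE regression).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train(modis_img, landsat_img, epochs=50):
    # Train the network to map MODIS patches to Landsat patches.
    x, _ = to_patches(modis_img)
    y, _ = to_patches(landsat_img)
    x, y = torch.from_numpy(x), torch.from_numpy(y)
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()

def predict(modis_img):
    # Apply the trained model patch-wise and rebuild the complete image.
    x, shape = to_patches(modis_img)
    with torch.no_grad():
        y = model(torch.from_numpy(x)).numpy()
    return from_patches(y, shape)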

Copyright & License

Copyright © 2025. The authors retain the copyright of this article. This article is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{160757,
        author = {P. Veena and C. C. Manju},
        title = {Accurate Patch Based Satellite Image Fusion Using Deep Convolution Neural Network},
        journal = {International Journal of Innovative Research in Technology},
        year = {},
        volume = {10},
        number = {1},
        pages = {1131-1139},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=160757},
        abstract = {Spatiotemporal fusion combines Landsat and MODIS images, which have complementary spatial and temporal characteristics, to create high-resolution data. A deep convolutional neural network (DCNN)-based spatial fusion method is presented in this work to handle large volumes of remote sensing data in practical applications. Landsat image regions are derived from low-resolution MODIS images. A large number of training patches is first collected with respect to attributes such as colour, edges, and pixel statistics. The images are then divided into 3×3 patches and converted to arrays for the DCNN training process. A regression-based DCNN training algorithm is used to predict the missing information from the local patches. The trained DCNN model is then used to generate the predicted output following the same procedure as in training. During testing, the patches are finally reassembled into a matrix to obtain the complete output image. Two standard Landsat–MODIS datasets are extensively evaluated, and the images are assessed using the RMSE, MAE, RMAE, and MACE metrics. The proposed strategy yields more accurate fusion results than sparse representation-based methods. Across multiple images from the datasets, average RMSE, MAE, RMAE, and MACE values of 0.004, 0.09, 0.02, and 4.65, respectively, are achieved.},
        keywords = {Spatiotemporal, remote sensing, MODIS images, deep convolutional neural network},
        month = {},
        }

Cite This Article

  • ISSN: 2349-6002
  • Volume: 10
  • Issue: 1
  • PageNo: 1131-1139