Language: English
Volume (Issue): 13 (24)
Illumination variations in non-atmospherically corrected high-resolution satellite (HRS) images acquired at different dates, times, and locations pose a major challenge for large-area environmental mapping and monitoring. The problem is exacerbated when a classification model is trained on only one image (often with limited training data) but applied to other scenes without collecting additional samples from those new images. In this research, focusing on caribou lichen mapping, we evaluated the potential of conditional Generative Adversarial Networks (cGANs) for normalizing WorldView-2 (WV2) images of one area to a source WV2 image of another area on which a lichen detector model was trained. We considered an extreme case in which the classifier was not fine-tuned on the normalized images. We tested two main scenarios for normalizing four target WV2 images to a source 50 cm pansharpened WV2 image: (1) normalization conditioned only on the WV2 panchromatic band, and (2) normalization conditioned on both the WV2 panchromatic band and Sentinel-2 surface reflectance (SR) imagery. Our experiments showed that normalization based only on the WV2 panchromatic band already yielded a significant improvement in lichen-detection accuracy compared with using the original pansharpened target images. However, conditioning the cGAN on both the WV2 panchromatic band and auxiliary information (in this case, Sentinel-2 SR imagery) further improved the normalization and the subsequent classification results, because it adds a source of information that is more invariant across acquisitions. Using only the panchromatic band, F1-scores ranged from 54% to 88%; using the fused panchromatic and SR data, they ranged from 75% to 91%.
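To make the normalization step concrete, the sketch below shows a minimal pix2pix-style cGAN in PyTorch for scenario (2), where the generator is conditioned on a stack of the WV2 panchromatic band and resampled Sentinel-2 SR bands and outputs normalized pansharpened bands. The layer configuration, band counts, patch size, and loss weighting are illustrative assumptions and do not reflect the authors' actual implementation.

```python
# Illustrative pix2pix-style cGAN sketch (assumed architecture, not the paper's).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps the conditioning stack (WV2 pan + resampled Sentinel-2 SR bands)
    to normalized pansharpened bands in the source image's radiometry."""
    def __init__(self, cond_channels: int, out_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(cond_channels, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2, inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, out_channels, 4, stride=2, padding=1),
            nn.Tanh(),  # outputs scaled to [-1, 1], matching the training targets
        )

    def forward(self, cond):
        return self.net(cond)

class Discriminator(nn.Module):
    """PatchGAN-style critic that scores (condition, image) pairs."""
    def __init__(self, cond_channels: int, img_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(cond_channels + img_channels, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # patch-level real/fake scores
        )

    def forward(self, cond, img):
        return self.net(torch.cat([cond, img], dim=1))

# Scenario (2): condition on the 1-band WV2 pan plus (here, assumed) 4 Sentinel-2 SR
# bands, and generate 4 normalized pansharpened WV2-like bands.
G = Generator(cond_channels=1 + 4, out_channels=4)
D = Discriminator(cond_channels=1 + 4, img_channels=4)

cond = torch.randn(8, 5, 256, 256)   # pan + S2 SR conditioning stack (dummy data)
real = torch.randn(8, 4, 256, 256)   # source-domain pansharpened patches (dummy data)

fake = G(cond)
adv, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
pred_fake = D(cond, fake)
# Standard pix2pix objective: adversarial term + weighted L1 fidelity term.
g_loss = adv(pred_fake, torch.ones_like(pred_fake)) + 100.0 * l1(fake, real)
```

Presumably, such a generator would be trained on patches drawn from the source scene (where the desired radiometry is known) and then applied to the target WV2 scenes before running the unchanged lichen classifier; the abstract does not specify these training details, so this usage note is an assumption.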