Self-supervised pre-training for glacier calving front extraction from synthetic aperture radar imagery
Event date:
Webinar with Nora Gourmelon, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU). The factors influencing melt at glacier fronts facing the ocean remain an active area of research. To better understand these factors, it is essential to monitor changes in glacier calving fronts. Because such monitoring must capture weather-dependent and seasonal variations, continuous all-weather observation is needed, making Synthetic Aperture Radar (SAR) the preferred imaging method. In recent years, deep learning models have been developed to automate the extraction of calving front positions; however, their performance on SAR data remains suboptimal. Limited labeled data and the high variability of SAR images hinder traditional supervised learning. Foundation models pre-trained on large, diverse datasets could provide robust feature representations that require only minimal labeled data for fine-tuning on specific SAR tasks. However, in preliminary experiments we found that the domain gap is too large for the task of extracting calving fronts from SAR imagery: foundation models fine-tuned on the "CAlving Fronts and where to Find thEm" (CaFFe) benchmark dataset performed below par. We therefore compiled an unlabeled dataset of Sentinel-1 SAR image sequences of Arctic glaciers, each associated with a single Sentinel-2 optical reference image. Using this dataset, we developed a novel multi-modal self-supervised pre-training strategy and applied it to pre-train a hybrid CNN-transformer model. Fine-tuning on the CaFFe benchmark shows that the pre-trained model outperforms its non-pre-trained counterpart, enabling more robust calving front segmentation and demonstrating the potential of data-efficient learning for SAR imagery.
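The abstract does not specify the pre-training objective. One common choice for multi-modal self-supervised alignment (here, a per-glacier SAR embedding paired with an optical embedding) is an InfoNCE-style contrastive loss, in which matching SAR/optical pairs in a batch are positives and all other pairings are negatives. The sketch below is a minimal NumPy illustration under that assumption, not the method presented in the talk; the function name and temperature value are hypothetical.

```python
import numpy as np

def info_nce_loss(sar_emb, opt_emb, temperature=0.1):
    """Contrastive (InfoNCE-style) loss aligning SAR and optical embeddings.

    Rows with the same index are assumed to be matching pairs (positives);
    all other row pairings in the batch act as negatives. Illustrative only.
    """
    # L2-normalize so the dot product is the cosine similarity
    sar = sar_emb / np.linalg.norm(sar_emb, axis=1, keepdims=True)
    opt = opt_emb / np.linalg.norm(opt_emb, axis=1, keepdims=True)
    logits = sar @ opt.T / temperature            # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy with the matching pair (diagonal) as the target class
    return -np.mean(np.diag(log_probs))
```

In a full pipeline, `sar_emb` and `opt_emb` would come from modality-specific encoders (e.g. the CNN-transformer backbone mentioned above), and the loss would be minimized over batches of glacier image pairs; this standalone function only shows the shape of the objective.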