You May Also Enjoy
Climate AI Nordics Newsletter, October 2025
Published:
In our October newsletter, we celebrate one year of Climate AI Nordics with a look back at our community's key milestones and achievements. This issue also features a new paper on urban change detection, details on upcoming workshops at EurIPS 2025, and several new job opportunities.
Junior researcher in machine learning
Published:
RISE is hiring a junior researcher in machine learning to develop AI-based tools for creating smarter, more sustainable cities and to help build a greener future.
Multi-resolution learning with neural operators and long short-term memory neural networks
Event date:
Webinar with Katarzyna Michalowska, SINTEF. In real-world applications, collecting large amounts of high-resolution data is rarely practical. Specialized equipment, time-intensive measurements, and expensive high-fidelity simulations often mean that high-resolution samples are limited. Meanwhile, low-resolution data are far easier to obtain and often exist in abundance, creating a common scenario: plentiful coarse data but limited fine data. Standard neural networks, which require fixed-resolution inputs, cannot exploit this imbalance and typically perform poorly when generalizing across resolutions. Deep operator networks (DeepONets) offer a distinct advantage over standard neural networks through a property known as discretization invariance, which enables learning across varying data resolutions. However, DeepONets alone do not effectively capture long-term temporal dependencies, limiting their performance on problems involving long time horizons. In this talk, we present a framework that addresses both challenges: multi-resolution learning and long-time-horizon modelling. We achieve this by extending DeepONets with long short-term memory networks (LSTMs) and introducing a multi-stage training procedure that leverages data at multiple resolutions. The hybrid model first learns global dynamics from abundant low-resolution data and is then fine-tuned on limited high-resolution samples, capturing both multi-resolution structure and temporal dependencies. In tests on nonlinear dynamical systems, our multi-resolution DON-LSTM achieves lower generalization error and requires fewer high-resolution samples than standard DeepONet or LSTM models. Our results demonstrate that the proposed approach is well suited for real-world scenarios where high-resolution data are limited, highlighting its potential for practical applications across science and engineering.
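For readers who want a concrete picture of the idea, the sketch below shows one plausible way to combine a DeepONet-style branch/trunk network with an LSTM and train it in two stages: first on abundant low-resolution data, then on a few high-resolution samples. It is a minimal PyTorch illustration with toy random tensors, not the speaker's implementation; the `DONLSTM` class, the `fit` helper, and all layer sizes and hyperparameters are assumptions made for illustration only.

```python
# Minimal sketch (not the authors' code) of a DeepONet + LSTM hybrid with
# two-stage multi-resolution training. All shapes and data are toy examples.
import torch
import torch.nn as nn

class DONLSTM(nn.Module):
    def __init__(self, n_sensors, hidden=64):
        super().__init__()
        # Branch net encodes the input function sampled at n_sensors points.
        self.branch = nn.Sequential(nn.Linear(n_sensors, hidden), nn.Tanh(),
                                    nn.Linear(hidden, hidden))
        # Trunk net encodes the query coordinate (here: time t).
        self.trunk = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                                   nn.Linear(hidden, hidden))
        # LSTM models temporal dependencies across the query times.
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, u, t):
        # u: (batch, n_sensors) sampled input function
        # t: (batch, n_times, 1) query times (number of times may vary)
        b = self.branch(u).unsqueeze(1)   # (batch, 1, hidden)
        g = self.trunk(t)                 # (batch, n_times, hidden)
        z, _ = self.lstm(b * g)           # DeepONet-style product, then LSTM
        return self.head(z)               # (batch, n_times, 1)

def fit(model, u, t, y, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(u, t), y)
        loss.backward()
        opt.step()

model = DONLSTM(n_sensors=32)
# Stage 1: many coarse (low-resolution) trajectories; Stage 2: few fine ones.
u_lo, t_lo, y_lo = torch.randn(256, 32), torch.rand(256, 20, 1), torch.randn(256, 20, 1)
u_hi, t_hi, y_hi = torch.randn(16, 32), torch.rand(16, 80, 1), torch.randn(16, 80, 1)
fit(model, u_lo, t_lo, y_lo)                       # pre-train on low resolution
fit(model, u_hi, t_hi, y_hi, epochs=50, lr=1e-4)   # fine-tune on high resolution
```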
Self-supervised pre-training for glacier calving front extraction from synthetic aperture radar imagery
Event date:
Webinar with Nora Gourmelon, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU). The factors influencing melt at glacier fronts facing the ocean remain an active area of research. To better understand these factors, it is essential to monitor changes in glacier calving fronts. Because these changes vary with weather and season, year-round observation is needed, and Synthetic Aperture Radar (SAR), which acquires imagery regardless of cloud cover and daylight, is the preferred imaging method for such monitoring. In recent years, deep learning models have been developed to automate the extraction of calving front positions; however, their performance on SAR data remains suboptimal. Limited labeled data and high variability in SAR images hinder traditional supervised learning. Foundation models pre-trained on large, diverse datasets could provide robust feature representations that require minimal labeled data for fine-tuning on specific SAR tasks. In preliminary experiments, however, we found the domain gap to be too large for extracting calving fronts from SAR imagery: foundation models fine-tuned on the "CAlving Fronts and where to Find thEm" (CaFFe) benchmark dataset performed poorly. We therefore compiled an unlabeled dataset of Sentinel-1 SAR image sequences of Arctic glaciers, each associated with a single Sentinel-2 optical reference image. Using this dataset, we developed a novel multi-modal self-supervised pre-training strategy and applied it to pre-train a hybrid CNN-transformer model. Fine-tuning on the CaFFe benchmark showed that the pre-trained model outperforms its non-pre-trained counterpart, enabling more robust calving front segmentation and demonstrating the potential of data-efficient learning for SAR imagery.
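As a rough illustration of what multi-modal self-supervised pre-training on SAR-optical pairs can look like, the sketch below aligns embeddings of Sentinel-1 SAR crops with embeddings of their Sentinel-2 optical reference using a contrastive (InfoNCE) objective on a small hybrid CNN-transformer encoder. This is a generic sketch under stated assumptions, not the pre-training strategy presented in the talk; the `HybridEncoder` architecture, the choice of contrastive loss, and all hyperparameters are illustrative.

```python
# Minimal sketch (assumptions, not the speaker's method) of contrastive
# SAR-optical pre-training for a hybrid CNN-transformer encoder in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridEncoder(nn.Module):
    """Small CNN stem followed by a transformer over spatial patches."""
    def __init__(self, in_ch, dim=64):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, dim, 4, stride=4), nn.GELU(),
            nn.Conv2d(dim, dim, 2, stride=2), nn.GELU())
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        f = self.stem(x)                        # (B, dim, H', W')
        tokens = f.flatten(2).transpose(1, 2)   # (B, H'*W', dim)
        tokens = self.transformer(tokens)
        return self.proj(tokens.mean(dim=1))    # global embedding (B, dim)

def info_nce(z_sar, z_opt, tau=0.1):
    """Symmetric InfoNCE: match each SAR embedding to its optical reference."""
    z_sar, z_opt = F.normalize(z_sar, dim=1), F.normalize(z_opt, dim=1)
    logits = z_sar @ z_opt.t() / tau
    labels = torch.arange(z_sar.size(0))
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

# Toy batch: 1-channel SAR crops and 3-channel optical crops of the same
# glaciers (stand-ins for the real unlabeled Sentinel-1/Sentinel-2 dataset).
sar_enc, opt_enc = HybridEncoder(in_ch=1), HybridEncoder(in_ch=3)
optimizer = torch.optim.Adam(list(sar_enc.parameters()) + list(opt_enc.parameters()), lr=1e-4)
sar = torch.randn(8, 1, 64, 64)
optical = torch.randn(8, 3, 64, 64)
optimizer.zero_grad()
loss = info_nce(sar_enc(sar), opt_enc(optical))
loss.backward()
optimizer.step()
# After pre-training, sar_enc would be fine-tuned on the labeled CaFFe
# benchmark for calving-front segmentation.
```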

