Archive Layout with Content

A variety of common markup elements, showing how the theme styles them.

Header one

Header two

Header three

Header four

Header five

Header six

Blockquotes

Single line blockquote:

Quotes are cool.

Tables

Entry    | Item
John Doe | 2016 | Description of the item in the list
Jane Doe | 2019 | Description of the item in the list
Doe Doe  | 2022 | Description of the item in the list

Header1 | Header2 | Header3
cell1   | cell2   | cell3
cell4   | cell5   | cell6
cell1   | cell2   | cell3
cell4   | cell5   | cell6
Foot1   | Foot2   | Foot3

Definition Lists

Definition List Title
Definition list division.
Startup
A startup company or startup is a company or temporary organization designed to search for a repeatable and scalable business model.
#dowork
Coined by Rob Dyrdek and his personal bodyguard Christopher “Big Black” Boykins, “Do Work” works as a self-motivator and a way to motivate your friends.
Do It Live
I’ll let Bill O’Reilly explain this one.

Unordered Lists (Nested)

Ordered List (Nested)

  1. List item one
    1. List item one
      1. List item one
      2. List item two
      3. List item three
      4. List item four
    2. List item two
    3. List item three
    4. List item four
  2. List item two
  3. List item three
  4. List item four

Buttons

Make any link stand out more by applying the .btn class.

Notices

Watch out! You can also add notices by appending {: .notice} to a paragraph.

HTML Tags

Address Tag

1 Infinite Loop
Cupertino, CA 95014
United States

This is an example of a link.

Abbreviation Tag

The abbreviation CSS stands for “Cascading Style Sheets”.

Cite Tag

“Code is poetry.” —Automattic

Code Tag

You will learn later on in these tests that word-wrap: break-word; will be your best friend.

Strike Tag

This tag will let you strike out text.

Emphasize Tag

The emphasize tag should italicize text.

Insert Tag

This tag should denote inserted text.

Keyboard Tag

This scarcely known tag emulates keyboard text, which is usually styled like the <code> tag.

Preformatted Tag

This tag styles large blocks of code.

.post-title {
  margin: 0 0 5px;
  font-weight: bold;
  font-size: 38px;
  line-height: 1.2;
  and here's a line of some really, really, really, really long text, just to see how the PRE tag handles it and to find out how it overflows;
}

Quote Tag

Developers, developers, developers… –Steve Ballmer

Strong Tag

This tag shows bold text.

Subscript Tag

Getting our science styling on with H2O, which should push the “2” down.

Superscript Tag

Still sticking with science and Albert Einstein’s E = mc2, which should lift the 2 up.

Variable Tag

This allows you to denote variables.

High-stakes decisions from low-quality data: AI decision-making for conservation

Event date:

Webinar with Lily Xu, Columbia University. Like many of society's grand challenges, biodiversity conservation requires effectively allocating and managing our limited resources in the face of imperfect information. My research develops data-driven AI decision-making methods to do so, overcoming the messy data ubiquitous in these settings. Here, I’ll present technical advances in machine learning, reinforcement learning, and causal inference, addressing research questions that emerged from on-the-ground challenges in wildlife conservation. I’ll also discuss bridging the gap between research and practice, with anti-poaching field tests in Cambodia, field visits in Belize and Uganda, and large-scale deployment with SMART conservation software.
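
To make the resource-allocation framing concrete, here is a minimal, hypothetical sketch (not the speaker's method) of sequential patrol allocation under uncertainty: Beta-Bernoulli Thompson sampling over a handful of park regions with unknown snare-detection rates. The regions, probabilities, and horizon are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (illustration only): one patrol per day across candidate
# park regions, learning which regions most often turn up snares.
true_detection_prob = np.array([0.10, 0.35, 0.20, 0.05])  # unknown to the planner
n_regions = len(true_detection_prob)

# Beta-Bernoulli Thompson sampling: keep a posterior per region and patrol the
# region whose sampled detection rate is highest, balancing exploration and exploitation.
successes = np.ones(n_regions)  # Beta prior alpha
failures = np.ones(n_regions)   # Beta prior beta

for day in range(500):
    sampled_rates = rng.beta(successes, failures)
    region = int(np.argmax(sampled_rates))
    detected = rng.random() < true_detection_prob[region]
    successes[region] += detected
    failures[region] += 1 - detected

print("Posterior mean detection rate per region:",
      np.round(successes / (successes + failures), 2))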

Debiasing AI predictions for causal inference without fresh ground truth data

Event date:

Webinar with Markus Pettersson, Chalmers University of Technology. Machine learning models trained on Earth observation data, particularly satellite imagery, have recently shown impressive performance in predicting household-level wealth indices, potentially addressing chronic data scarcity in global development research. While these predictions exhibit strong predictive power, they inherently suffer from shrinkage toward the mean, resulting in attenuated estimates of causal treatment effects and thus limiting their utility in policy evaluations. Existing debiasing methods, such as Prediction-Powered Inference (PPI), require additional fresh ground-truth data at the downstream causal inference stage, severely restricting their applicability in data-poor environments. In this paper, we introduce and rigorously evaluate two novel correction methods, linear calibration correction and Tweedie's correction, that substantially reduce prediction bias without relying on newly collected labeled data. Our methods operate on out-of-sample predictions from pre-trained models, treating these models as black-box functions. Linear calibration corrects bias through a straightforward linear transformation derived from held-out calibration data, while Tweedie's correction leverages empirical Bayes principles to directly address shrinkage-induced biases by exploiting score functions derived from predicted outcomes. Through analytical exercises and experiments using Demographic and Health Survey (DHS) data, we demonstrate that both proposed methods outperform existing data-free approaches, achieving significant reductions in attenuation bias and thus providing more accurate, actionable, and policy-relevant estimates. Our approach represents a generalizable, lightweight toolkit that enhances the reliability of causal inference when direct outcome measures are limited or unavailable.
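
As a rough illustration of the linear calibration idea, the toy simulation below (assumed shrinkage and noise levels, not the authors' data, experiments, or exact estimator) fits a linear map from shrunken predictions to ground truth on a small calibration split and applies it before estimating a treatment effect; in this simulation the calibrated estimate moves closer to the true effect than the naive one.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative simulation: a treatment T raises the true wealth index Y by 1.0,
# but only shrunken model predictions Y_hat are available at scale, which
# attenuates the naive effect estimate.
n = 20_000
T = rng.integers(0, 2, n)
Y = 1.0 * T + rng.normal(0.0, 1.0, n)         # true outcome, mostly unobserved
Y_hat = 0.6 * Y + rng.normal(0.0, 0.2, n)     # black-box predictions shrunk toward the mean

def effect(outcome, treat):
    """Difference in means between treated and control units."""
    return outcome[treat == 1].mean() - outcome[treat == 0].mean()

# Linear calibration: on a small held-out split with ground truth, regress the
# true outcome on the prediction, then apply that linear map to all other predictions.
calib = slice(0, 1_000)
slope, intercept = np.polyfit(Y_hat[calib], Y[calib], deg=1)
Y_cal = intercept + slope * Y_hat

rest = slice(1_000, None)
print("true effect       :", round(effect(Y[rest], T[rest]), 2))
print("naive (raw preds) :", round(effect(Y_hat[rest], T[rest]), 2))
print("linear-calibrated :", round(effect(Y_cal[rest], T[rest]), 2))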

Generative domain adaptation and foundation models for robust Earth observation

Event date:

Webinar with Georges Le Bellier, Conservatoire National des Arts et Métiers. Deep learning for remote sensing plays a crucial role in turning satellite and aerial imagery into dependable, real-world insights. However, Earth observation models must handle diverse environments, sensors, and conditions—such as clouds, seasonal shifts, and geographic differences—while still producing accurate results. In this talk, we explore two paths that lead to more robust and adaptable algorithms: generative domain adaptation and geospatial foundation models. First, I will introduce FlowEO, a generative approach to Unsupervised Domain Adaptation (UDA) for Earth observation, and show its high performance in UDA scenarios for several downstream tasks, including dense prediction and classification. This flow-matching-based translation method improves pretrained predictive models' accuracies in challenging scenarios such as post-disaster response and high cloud coverage cases with SAR-to-optical translation. FlowEO’s generative domain adaptation method is independent of the downstream task and does not require retraining the predictive model. Then, I will present “PANGAEA: A Global and Inclusive Benchmark for Geospatial Foundation Models”, a standardized evaluation protocol that covers a diverse set of datasets, dense prediction tasks, resolutions, sensor modalities, and temporalities. This benchmark includes comparisons not only between geospatial foundation models (GFMs) but also against supervised baselines, namely U-Net and ViT, and highlights the strengths and weaknesses of GFMs. In addition, PANGAEA evaluates models’ accuracy in cases where labels are limited and examines the impact of multi-temporal data on GFMs.
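
For readers unfamiliar with flow matching, here is a minimal, self-contained sketch of the generic flow-matching objective that methods like FlowEO build on; the toy MLP, feature dimension, and random "domains" are placeholders, not the authors' architecture or data.

import torch
import torch.nn as nn

# We learn a velocity field v(x_t, t) that transports "SAR-like" samples x0
# toward "optical-like" samples x1 along straight-line paths.
dim = 64  # stand-in for flattened image features
velocity = nn.Sequential(nn.Linear(dim + 1, 256), nn.SiLU(), nn.Linear(256, dim))
opt = torch.optim.Adam(velocity.parameters(), lr=1e-3)

def flow_matching_step(x0, x1):
    """One training step: regress the predicted velocity onto (x1 - x0) at a random time t."""
    t = torch.rand(x0.shape[0], 1)
    x_t = (1 - t) * x0 + t * x1        # point on the straight path at time t
    target_v = x1 - x0                 # velocity of that path
    pred_v = velocity(torch.cat([x_t, t], dim=1))
    loss = ((pred_v - target_v) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy unpaired batches standing in for SAR and optical feature vectors.
for step in range(200):
    x_sar = torch.randn(32, dim)
    x_opt = torch.randn(32, dim) + 2.0  # shifted distribution as the target domain
    loss = flow_matching_step(x_sar, x_opt)
print("final flow-matching loss:", round(loss, 3))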

Self-supervised pre-training for glacier calving front extraction from synthetic aperture radar imagery

Event date:

Webinar with Nora Gourmelon, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU). The factors influencing melt at glacier fronts facing the ocean remain an active area of research. To better understand these factors, it is essential to monitor changes in glacier calving fronts. Due to the importance of weather-dependent and seasonal variations, Synthetic Aperture Radar (SAR) is the preferred imaging method for such monitoring. In recent years, deep learning models have been developed to automate the extraction of calving front positions; however, their performance on SAR data remains suboptimal. Limited labeled data and high variability in SAR images hinder traditional supervised learning. Foundation models pre-trained on large, diverse datasets could provide robust feature representations that require minimal labeled data for fine-tuning on specific SAR tasks. However, in preliminary experiments, we found that the domain gap is too large for the task of extracting calving fronts from SAR imagery, and foundation models fine-tuned on the "CAlving Fronts and where to Find thEm" (CaFFe) benchmark dataset yielded subpar results. Therefore, we compiled an unlabeled dataset of Sentinel-1 SAR image sequences of Arctic glaciers, each associated with a single Sentinel-2 optical reference image. Using this dataset, we developed a novel multi-modal self-supervised pre-training strategy and applied it to pre-train a hybrid CNN-transformer model. Fine-tuning on the CaFFe benchmark showed that the pre-trained model outperforms its non-pre-trained counterpart, enabling more robust calving front segmentation and demonstrating the potential of data-efficient learning for SAR imagery.
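
To illustrate what multi-modal self-supervised pre-training on paired Sentinel-1/Sentinel-2 scenes can look like, here is a generic cross-modal contrastive sketch; the small CNN encoders, channel counts, and InfoNCE objective are illustrative assumptions, not the pre-training strategy or CNN-transformer backbone described in the talk.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Cross-modal contrastive alignment: embeddings of co-located SAR and optical
# patches are pulled together, mismatched pairs are pushed apart.
class Encoder(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 128),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

sar_enc, opt_enc = Encoder(in_ch=2), Encoder(in_ch=4)  # e.g. VV/VH vs. four optical bands
optimizer = torch.optim.Adam(list(sar_enc.parameters()) + list(opt_enc.parameters()), lr=1e-4)

def contrastive_step(sar, optical, temperature=0.1):
    """InfoNCE step: matching SAR/optical pairs attract, mismatched pairs repel."""
    z_s, z_o = sar_enc(sar), opt_enc(optical)
    logits = z_s @ z_o.t() / temperature       # similarity of every SAR-optical pairing
    labels = torch.arange(sar.shape[0])        # the i-th SAR patch matches the i-th optical patch
    loss = F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batch of co-located 64x64 patches standing in for real Sentinel-1/2 pairs.
loss = contrastive_step(torch.randn(16, 2, 64, 64), torch.randn(16, 4, 64, 64))
print("contrastive loss:", round(loss, 3))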

2025 Nordic Workshop on AI for Climate Change

The 2025 Nordic Workshop on AI for Climate Change will gather researchers from the Nordics. This one-day, in-person workshop will take place in Gothenburg, Sweden, on May 13, 2025. The workshop will feature a mix of keynotes, oral presentations, and posters around the topics of AI for climate change, including AI for biodiversity and the green transition. It will be a meeting point for a wide range of researchers, primarily from the Nordic countries.
