Remote sensing and deep learning for standing dead-tree detection and mapping: A review of advances, challenges, and future directions


This is a Preprint and has not been peer reviewed. This is version 1 of this Preprint.



Authors

Anwarul Islam Chowdhury, Mirela Beloiu, Teja Kattenborn, Clemens Mosig, Mete Ahishali, Mikko Vastaranta, Eetu Puttonen, Eija Honkavaara, Langning Huo, Md. Jamal Uddin, Verena C. Griess, Anton Kuzmin, Yan Cheng, Samuli Junttila

Abstract

Standing dead trees are visible indicators of recent tree mortality and an important transitional component linking forest disturbance to future lying deadwood, habitat availability, and carbon storage. As drought, insect outbreaks, pathogens, and climate extremes intensify tree mortality worldwide, scalable methods are needed to detect and map standing dead trees consistently across forest landscapes. Recent advances in high-resolution remote sensing and computer-vision-based deep learning, including object detection, semantic segmentation, instance segmentation, and transformer-based models, are making large-scale standing dead-tree detection increasingly feasible. This review synthesizes 38 studies from 2019 to 2026 that apply deep learning to optical and LiDAR data acquired from UAV-borne, airborne, satellite, and multi-sensor remote sensing platforms for standing dead-tree detection and mapping. We identify five major advances: 1) object-detection models have become central for identifying dead trees, especially where mortality is spatially scattered and dead trees occur at low density, with transformer-based frameworks emerging as a promising development; 2) U-Net derivatives and hybrid or ensemble models are widely used for segmentation in dense canopies; 3) multi-sensor fusion can improve detection robustness, particularly where spectral and structural cues are complementary; 4) transfer learning and domain adaptation are important for scaling across regions, although cross-biome generalization remains limited by differences in forest structure, dead-tree appearance, and sensor characteristics; and 5) model comparability is constrained by inconsistent annotations, varying definitions of standing dead trees, and the lack of standardized benchmark datasets. Despite substantial progress, operational deployment remains limited by canopy occlusion, class imbalance, annotation variability, and uncertain transferability. By linking ecological monitoring needs with recent methodological advances, this review outlines a pathway toward scalable, transferable, and benchmarked deep-learning systems for standing dead-tree monitoring, supported by standardized datasets, domain-invariant and self-supervised learning, and open databases such as deadtrees.earth.

DOI

https://doi.org/10.31223/X5BZ05

Subjects

Forest Management, Forest Sciences, Other Forestry and Forest Sciences

Keywords

Tree mortality, Deadwood, Forest health, UAV, LiDAR, Multi-sensor fusion

Dates

Published: 2026-05-07 20:26

Last Updated: 2026-05-07 20:26

License

CC BY Attribution 4.0 International

Additional Metadata

Conflict of interest statement:
None
