Global deep learning model for delineation of optically shallow and optically deep water in Sentinel-2 imagery

This is a Preprint and has not been peer reviewed. The published version of this Preprint is available: https://doi.org/10.1016/j.rse.2024.114302. This is version 1 of this Preprint.


Authors

Galen Richardson, Neve Foreman, Anders Knudby, Yulun Wu, Yiwen Lin

Abstract

In aquatic remote sensing, algorithms commonly used to map environmental variables rely on assumptions regarding the optical environment. Specifically, some algorithms assume that the water is optically deep, i.e., that the influence of bottom reflectance on the measured signal is negligible. Other algorithms assume the opposite and are based on an estimation of the bottom-reflected part of the signal. These algorithms may suffer from reduced performance when the relevant assumptions are not met. To address this, we introduce a general-purpose tool that automates the delineation of optically deep and optically shallow waters in Sentinel-2 imagery. This allows algorithms for satellite-derived bathymetry, bottom habitat identification, and water-quality mapping to be applied only to the environments for which they are intended, and thus enhances the accuracy of derived products. We sampled 440 Sentinel-2 images from a wide range of coastal locations, covering all continents and latitudes, and manually annotated 1000 points in each image as either optically deep or optically shallow by visual interpretation. This dataset was used to train six machine learning classification models (Maximum Likelihood, Random Forest, ExtraTrees, AdaBoost, XGBoost, and a deep neural network), using both the original top-of-atmosphere reflectance data and an atmospherically corrected dataset. The models were trained on features including kernel means and standard deviations for each band, as well as geographical location. A deep neural network emerged as the best model, with an average accuracy of 82.3% across the two datasets and fast processing time. Higher accuracies can be achieved by removing pixels with intermediate probability scores from the predictions. We have made this model publicly available as a Python package.
This represents a substantial step toward automatic delineation of optically deep and shallow water in Sentinel-2 imagery, which allows the aquatic remote sensing community and downstream users to ensure that algorithms, such as those used in satellite-derived bathymetry or for mapping bottom habitat or water quality, are applied only to the environments for which they are intended.
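The workflow described in the abstract (per-band kernel means and standard deviations plus geographic location as features, a neural-network classifier, and removal of pixels with intermediate probability scores) can be sketched roughly as follows. This is an illustrative toy using scipy, scikit-learn, and synthetic data; the window size, network shape, probability thresholds, and coordinates are assumptions for demonstration, not the authors' published package or its API:

```python
# Illustrative sketch (not the authors' code) of the approach described in the
# abstract: per-band kernel means and standard deviations plus geographic
# location feed a small neural-network classifier, and pixels with
# intermediate probability scores are dropped from the final map.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.neural_network import MLPClassifier

def kernel_features(band, size=5):
    """Local mean and standard deviation over a size x size window."""
    mean = uniform_filter(band, size=size)
    sq_mean = uniform_filter(band ** 2, size=size)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    return mean, std

rng = np.random.default_rng(0)
h, w, n_bands = 32, 32, 4                 # toy stand-in for a Sentinel-2 subset
image = rng.random((h, w, n_bands))

feats = []
for b in range(n_bands):
    mean, std = kernel_features(image[..., b])
    feats += [mean, std]
# Append per-pixel geographic coordinates (here: synthetic lat/lon grids).
lat = np.linspace(44.0, 44.1, h)[:, None].repeat(w, axis=1)
lon = np.linspace(-63.6, -63.5, w)[None, :].repeat(h, axis=0)
feats += [lat, lon]
X = np.stack(feats, axis=-1).reshape(-1, 2 * n_bands + 2)

# Toy labels: 1 = optically shallow, 0 = optically deep.
y = rng.integers(0, 2, size=X.shape[0])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=50).fit(X, y)

# Higher-confidence map: discard pixels with intermediate probabilities.
p_shallow = clf.predict_proba(X)[:, 1]
confident = (p_shallow < 0.3) | (p_shallow > 0.7)
labels = np.where(p_shallow > 0.5, 1, 0)
print(f"{confident.mean():.0%} of pixels kept as confident")
```

In practice the features would be computed per pixel from a full Sentinel-2 scene and the classifier trained on the annotated points; the thresholding step trades spatial coverage for accuracy, as noted in the abstract.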

DOI

https://doi.org/10.31223/X5D742

Subjects

Environmental Indicators and Impact Assessment, Environmental Monitoring, Environmental Sciences, Natural Resources and Conservation, Sustainability, Water Resource Management

Keywords

Python Tool, Earth Observation, Remote Sensing of Environment, remote sensing, Explainable AI, xAI, coastal ecosystems, Coastal Remote Sensing, Random Forest, Recursive Feature Elimination, sentinel-2, Seafloor Reflection, Neural Network, Optically Deep Water, Optically Shallow Water, Deep learning, machine learning, Aquatic Remote Sensing, bathymetry

Dates

Published: 2025-10-15 00:47

License

CC BY Attribution 4.0 International

Additional Metadata

Conflict of interest statement:
None

Data Availability:
The research data and tool can be accessed on GitHub at https://github.com/yulunwu8/Optically-Shallow-Deep. The model-training code itself is not shared, but it is described in sufficient detail in the paper to be replicable.