Active Learning with Deep Autoencoders for Seismic Facies Interpretation

This is a preprint and has not been peer reviewed. The published version is available at https://doi.org/10.1190/geo2022-0353.1. This is version 1 of this preprint.

Authors

Ahmad Mustafa, Ghassan AlRegib

Abstract

Machine learning-assisted seismic interpretation requires large quantities of labeled data annotated by expert interpreters, which is a costly and time-consuming process. Whereas existing works that minimize dependence on labeled data assume the annotation process is already complete, active learning, a subfield of machine learning, selects the most informative training samples for the interpreter to annotate while the interpretation model is being trained, achieving high performance with fewer labeled samples than would otherwise be possible. While active learning has been studied extensively for classification tasks on natural images, there is very little work on dense prediction tasks in geophysics such as seismic interpretation. We develop a first-of-its-kind active learning framework for seismic facies interpretation that exploits the manifold learning properties of deep autoencoders. By jointly learning representations for supervised and unsupervised tasks and then ranking unlabeled samples by their nearness to the data manifold, we identify the most relevant training samples to be labeled by the interpreter in each training round. On the popular F3 dataset, the proposed method improves interpretation accuracy over the baseline by close to 10 percentage points with only three fully annotated seismic sections.
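The abstract describes the core selection mechanism: a deep autoencoder learns the data manifold, and unlabeled samples are ranked against it to decide which sections the interpreter should annotate next. The sketch below is a minimal illustration of that ranking step only, not the authors' released code; the tiny architecture, the section shapes, and the rule of picking the highest-reconstruction-error sections are assumptions made for illustration, and the paper's exact ranking criterion may differ.

```python
# Minimal sketch (not the authors' released code): rank unlabeled seismic
# sections by autoencoder reconstruction error and pick the next sections
# to send to the interpreter for labeling.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyAutoencoder(nn.Module):
    """A small convolutional autoencoder over 2-D seismic sections (illustrative)."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


@torch.no_grad()
def rank_unlabeled_sections(model, sections, k=1):
    """Score each unlabeled section by reconstruction error and return the
    indices of the k sections chosen for annotation in this round.

    Here high reconstruction error is treated as "far from the learned
    manifold"; this selection rule is an assumption for the sketch.
    """
    model.eval()
    errors = []
    for x in sections:                      # x: (1, H, W) tensor
        x = x.unsqueeze(0)                  # add batch dimension
        errors.append(F.mse_loss(model(x), x).item())
    order = sorted(range(len(errors)), key=errors.__getitem__, reverse=True)
    return order[:k]


if __name__ == "__main__":
    # Stand-in data: 10 unlabeled sections of size 64x64.
    sections = [torch.randn(1, 64, 64) for _ in range(10)]
    model = TinyAutoencoder()
    print("Sections to annotate next:", rank_unlabeled_sections(model, sections, k=3))
```

In the full framework described by the abstract, this ranking would alternate with supervised training on the sections labeled so far, so each round adds the sections expected to help the interpretation model the most.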

DOI

https://doi.org/10.31223/X5XD32

Subjects

Geophysics and Seismology

Keywords

machine learning, interpretation, active learning

Dates

Published: 2023-04-26 11:57

License

CC BY Attribution 4.0 International

Additional Metadata

Data Availability:
Data and code are available and may be obtained at the GitHub link provided in the preprint.