Explainable Machine Learning for Hydrocarbon Prospect Risking

This is a preprint and has not been peer reviewed. The published version is available at https://doi.org/10.1190/geo2022-0594.1. This is version 1 of this preprint.

Authors

Ahmad Mustafa, Ghassan AlRegib, Klaas Koster

Abstract

Hydrocarbon prospect risking integrates information from multiple geophysical datasets and modalities to arrive at a probability of success for a given prospect. The DHI database gathers data from prospects drilled around the world, across multiple geologic settings, into one central knowledge base. A major goal of interest to geophysicists is to understand the impact on the risking process of the various seismic amplitude anomalies that are interpreted as direct hydrocarbon indicators (DHIs). The feature-by-feature, correlation-based analysis typically carried out for this purpose misses the complex feature interactions governing the underlying physical phenomena. Data-driven machine learning techniques have the potential to sift through large, multidimensional datasets to learn mappings from feature spaces to outcome classes. LIME (Local Interpretable Model-agnostic Explanations) is an explainability technique that explains the decisions of black-box models by locally approximating their behavior. We propose a novel method whereby LIME is used in conjunction with various machine learning models to learn mappings from feature spaces in the DHI database to the respective prospect outcomes. Consequently, we are able to highlight the seismic amplitude anomalies the models consider most important to the risking process, and we show that these insights agree with geophysical intuition. Moreover, we use LIME explanations to demonstrate a case study of bias detection in machine learning models applied to prospect risking. A limitation of LIME is that it only explains model behavior around individual data points. To address this, we propose novel metrics that summarize a model's global understanding of a dataset by aggregating local explanations over individual examples. To the best of our knowledge, this is the first work to use explainable machine learning to bring novel insights to prospect risk assessment.
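The workflow described above lends itself to a compact illustration. Below is a minimal sketch, not the authors' code: the DHI database is proprietary, so the feature names and training data here are hypothetical and synthetic. It shows how the open-source `lime` package's tabular explainer can be paired with a classifier, and how the magnitudes of local explanation weights can be averaged over many examples into a global importance score, in the spirit of the aggregation metrics the abstract proposes.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical DHI-style attributes; the paper's actual feature set is proprietary.
feature_names = ["amplitude_strength", "fit_to_structure", "flat_spot", "avo_response"]

# Synthetic stand-in for the DHI database: features plus drilled-prospect outcomes.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, len(feature_names)))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)

# Any black-box classifier works; a random forest is one common choice.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["dry hole", "discovery"],
    mode="classification",
)

# Aggregate absolute local weights over many examples into a global score.
n_explained = 50
global_importance = np.zeros(len(feature_names))
for row in X_train[:n_explained]:
    exp = explainer.explain_instance(row, model.predict_proba,
                                     num_features=len(feature_names))
    for idx, weight in exp.as_map()[1]:  # label 1 = "discovery"
        global_importance[idx] += abs(weight)
global_importance /= n_explained

for name, score in sorted(zip(feature_names, global_importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

Averaging the magnitude of local weights is one simple way to summarize a model's global view of a dataset; the paper's proposed metrics may differ in detail, but this captures the basic local-to-global aggregation idea.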

DOI

https://doi.org/10.31223/X5JD5D

Subjects

Geophysics and Seismology, Statistics and Probability

Keywords

prospect risking, machine learning, interpretability, trust

Dates

Published: 2023-06-20 10:10

License

Creative Commons Attribution 4.0 International (CC BY 4.0)

Additional Metadata

Data Availability: Not available; the data are proprietary.