This is a Preprint and has not been peer reviewed. This is version 1 of this Preprint.
Abstract
This study explores the integration of Explainable Artificial Intelligence (XAI) into environmental governance, addressing critical challenges in transparency, public trust, and regulatory compliance. As AI technologies increasingly influence decision-making in environmental management, ensuring equitable and ethical practices has become paramount. By examining key regulatory frameworks, including the EU's AI Act and the 2024 advisory opinion of the International Tribunal for the Law of the Sea (ITLOS), this research extends ITLOS's principles beyond marine environments to broader issues such as air quality, water management, and sustainable resource allocation. A novel framework is proposed to harmonize global standards while bridging digital literacy with regulatory governance, fostering accountability and inclusivity in resource management. Additionally, a participatory model is presented to empower communities, enabling them to engage with and challenge AI-driven decisions effectively. This work contributes to the development of ethical, explainable, and transparent AI practices, offering multidisciplinary insights for policymakers, researchers, and practitioners seeking to advance sustainable and inclusive environmental governance.
DOI
https://doi.org/10.31223/X57Q63
Subjects
Social and Behavioral Sciences
Keywords
Explainable Artificial Intelligence (XAI), Environmental Governance, Data Analytics, Sustainability, Algorithmic Transparency
Dates
Published: 2024-11-27 07:41
Last Updated: 2024-11-27 15:41