Experts highlight the value of explainable AI in geoscience

Timon Meyer, Fraunhofer-Institut für Nachrichtentechnik, Heinrich-Hertz-Institut, HHI
The value of explainable artificial intelligence (XAI). Credit: Nature Geoscience (2025). doi:10.1038/s41561-025-01639-x
A new paper published in Nature Geoscience by experts at the Fraunhofer Heinrich-Hertz-Institut (HHI) advocates the use of explainable artificial intelligence (XAI) methods in geoscience.
The researchers aim to promote the wider adoption of AI in geoscience (e.g., weather forecasting) by making the decision processes of AI models transparent and building confidence in their outcomes. Fraunhofer HHI, a world leader in XAI research, coordinates UN-supported global initiatives that lay the groundwork for international standards in the use of AI for disaster management.
AI offers an unparalleled opportunity to analyze data and solve complex, nonlinear problems in geoscience. However, the more complex an AI model becomes, the less interpretable it tends to be. In high-stakes situations such as disasters, a lack of understanding of how a model works, and the resulting lack of confidence in its outcomes, can hinder its implementation.
XAI methods address this challenge by providing insight into how AI systems reach their conclusions and by identifying data- or model-related issues. For example, XAI can detect spurious correlations in training data, i.e., patterns unrelated to the AI system's actual task that can distort its results.
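As a concrete illustration, the sketch below shows one common XAI technique, gradient-based saliency, which scores how strongly each input feature drives a model's prediction. The model, sample values, and feature names are hypothetical placeholders, not the methods or data from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical feature names; "sensor_id" stands in for a nuisance feature
# that should not influence a physically meaningful prediction.
features = ["rainfall", "soil_moisture", "slope", "sensor_id"]

# Placeholder model (randomly initialized), standing in for a trained one.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
model.eval()

# One hypothetical sample; requires_grad lets us ask how the prediction
# responds to each input feature.
x = torch.tensor([[12.3, 0.41, 18.0, 7.0]], requires_grad=True)

model(x).sum().backward()  # fills x.grad with d(prediction)/d(input)

# A large saliency score on a nuisance feature such as sensor_id would hint
# at a spurious correlation learned from the training data.
for name, score in zip(features, x.grad.abs().squeeze().tolist()):
    print(f"{name}: {score:.4f}")
```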
“Trustworthiness is key to AI adoption. XAI acts as a magnifying glass, allowing researchers, policymakers, and safety specialists to analyze data through the 'eyes' of the model, and to make sure that its dominant prediction strategies are understood and that no undesirable behavior exists,” explains Professor Wojciech Samek, Head of Artificial Intelligence at Fraunhofer HHI.
The authors of the paper analyzed 2.3 million arXiv abstracts of articles published between 2007 and 2022 and found that only 6.1% of geoscience-related papers mentioned XAI. Given its considerable potential, the authors sought to identify the challenges that keep geoscientists from adopting XAI methods.
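A literature scan of this kind can be approximated by matching XAI-related terms in abstracts. The sketch below is a hypothetical illustration; the paper's actual methodology and term list are not described in this article.

```python
import re

# Hypothetical list of XAI-related terms to match in abstracts.
XAI_TERMS = re.compile(
    r"explainable ai|\bxai\b|interpretab|saliency|\bshap\b|\blime\b", re.I
)

def mentions_xai(abstract: str) -> bool:
    """Return True if the abstract contains an XAI-related term."""
    return XAI_TERMS.search(abstract) is not None

# Two toy abstracts standing in for the 2.3 million analyzed in the paper.
abstracts = [
    "We apply explainable AI (XAI) to flood forecasting models ...",
    "A deep learning model for seismic phase picking ...",
]
share = sum(mentions_xai(a) for a in abstracts) / len(abstracts)
print(f"{share:.1%} of abstracts mention XAI")
```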
Focusing on natural disasters, the authors examined use cases curated by the International Telecommunication Union/World Meteorological Organization/UN Environment Programme Focus Group on AI for Natural Disaster Management. By surveying the researchers behind these use cases, the authors identified key motivations and hurdles.
Motivations included building trust in AI applications, gaining insights from data, and improving the efficiency of AI systems. In addition, most participants used XAI to analyze the underlying processes of their models. Conversely, those who did not use XAI cited the effort, time, and resources required as barriers.
“XAI has clear added value for geoscience: it improves the underlying datasets and AI models, identifies physical relationships captured by the data, and builds trust among end users. We hope that once geoscientists understand this value, XAI will become part of their AI pipelines,” says Dr. Monique Kuglitsch, Innovation Manager at Fraunhofer HHI and Chair of the Global Initiative on Resilience to Natural Hazards through AI Solutions.
To support the adoption of XAI in geoscience, the paper offers four practical recommendations:
1. Drive demand among stakeholders and end users for explainable models.
2. Build educational resources for XAI users that cover the features, explanatory power, and limitations of the various methods.
3. Connect geoscience and AI experts through international partnerships to promote knowledge sharing.
4. Support the standardization and interoperability of AI in natural disaster management and other geoscience domains through streamlined workflows.
In addition to Fraunhofer HHI experts Monique Kuglitsch, Ximeng Cheng, Jackie Ma, and Wojciech Samek, the paper was written by Jesper Dramsch, Miguel-Ángel Fernández-Torres, Andrea Toreti, Rustem Arif Albayrak, Lorenzo Nava, Rudy Venguswamy, Anirudh Koul, Raghavan Muthuregunathan, and Arthur Hrast Essenfelder.
More information: Jesper Sören Dramsch et al, Explainability can foster trust in artificial intelligence in geoscience, Nature Geoscience (2025). doi:10.1038/s41561-025-01639-x
Provided by Fraunhofer-Institut für Nachrichtentechnik, Heinrich-Hertz-Institut, HHI
Citation: Experts highlight the value of explainable AI in geoscience (2025, February 5). Retrieved February 5, 2025 from https://news/2025-02-experts-underscore-ai-ai-osciences.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.