
Opening the black box: explainable AI for automated bioturbation analysis in cores and outcrops – Scientific Reports



