
Source camera attribution using a rule-based explainable convolutional neural network – Scientific Reports

  • Article
  • Open access

Tahereh Nayerifard1, Haleh Amintoosi1 & Abbas Ghaemi Bafghi1

Scientific Reports (2026)

We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.


Abstract

In recent years, there has been a push towards adopting artificial intelligence (AI) models in digital forensics (DF), particularly deep learning (DL) models. While these models assist DF experts, their lack of transparency raises concerns about reliability. Although eXplainable Artificial Intelligence (XAI) has progressed, current methods remain limited for DF applications. Existing visual XAI techniques do not provide sufficient clarity for challenging image forensics tasks such as Source Camera Identification (SCI), nor do they offer mechanisms to assess whether a model’s decision is correct; most simply highlight influential regions without enabling examiners to validate the decision itself. Rule-based explainability is a promising strategy for increasing transparency, yet deploying it on real-world Convolutional Neural Networks (CNNs) is still challenging. Prior studies remain largely experimental and often require modifying the model to extract rules, which conflicts with the integrity requirements of DF workflows. To address these gaps, this paper introduces a framework that makes the CNN models used in the analysis stage of digital forensics explainable. By following three fundamental steps (layers trace detection, layers majority voting, and rule extraction), the framework provides structured, transparent visual output and rule-based textual explanations that are understandable to the user. On this basis, the first explainable SCI model is introduced, addressing a DF task that is particularly difficult to make explainable. The explainable output allows the DF examiner to confirm or reject the main model’s prediction based on the decisions of the layers, while preserving compliance with the principle of integrity. In addition, by identifying 27 of the 37 incorrect predictions made by the base model, the framework improved the model’s precision from 97.33% to 99.2%.
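As a rough illustration of the confirm-or-reject mechanism described above, the following Python sketch shows how per-layer votes could be aggregated by majority and compared against the base CNN’s prediction. This is not the authors’ implementation (which is available on GitHub); all function names, the support threshold, and the example data are hypothetical.

```python
# Minimal sketch (assumed, not the paper's code) of layer majority voting and
# rule-based confirmation/rejection of a base CNN's source-camera prediction.
from collections import Counter
from typing import List, Tuple

def layer_majority_vote(layer_predictions: List[str]) -> Tuple[str, float]:
    """Return the majority class among per-layer votes and its support ratio."""
    counts = Counter(layer_predictions)
    majority_class, votes = counts.most_common(1)[0]
    return majority_class, votes / len(layer_predictions)

def confirm_or_reject(base_prediction: str,
                      layer_predictions: List[str],
                      min_support: float = 0.5) -> str:
    """Rule: confirm the base prediction only if the layer majority matches it
    with at least `min_support` agreement; otherwise reject it."""
    majority_class, support = layer_majority_vote(layer_predictions)
    if majority_class == base_prediction and support >= min_support:
        return f"CONFIRM: layers agree on '{base_prediction}' ({support:.0%} support)"
    return (f"REJECT: base model predicted '{base_prediction}' but layer majority "
            f"is '{majority_class}' ({support:.0%} support)")

if __name__ == "__main__":
    # Hypothetical traces extracted from five intermediate layers.
    print(confirm_or_reject("Camera_A", ["Camera_A", "Camera_A", "Camera_A", "Camera_B", "Camera_A"]))
    print(confirm_or_reject("Camera_A", ["Camera_B", "Camera_B", "Camera_A", "Camera_B", "Camera_C"]))
```

In this spirit, rejecting predictions that the layers do not support is what allows incorrect base-model outputs to be flagged, which is how the reported precision rises from 97.33% to 99.2%.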

Data availability

The source code is available on GitHub.


Acknowledgements

Figures in this manuscript were drawn or modified by T.N. using draw.io (https://app.diagrams.net).

Author information

Authors and Affiliations

  1. Department of Computer Engineering, Faculty of Engineering, Ferdowsi University of Mashhad, Mashhad, Iran

    Tahereh Nayerifard, Haleh Amintoosi & Abbas Ghaemi Bafghi


Contributions

T.N. implemented the proposed idea and extracted the results. T.N. also prepared the first draft. H.A. revised the paper and supervised the whole process. H.A. and A.GhB. helped with the formation of the basic idea.

Corresponding author

Correspondence to Haleh Amintoosi.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article


Cite this article

Nayerifard, T., Amintoosi, H. & Ghaemi Bafghi, A. Source camera attribution using a rule-based explainable convolutional neural network. Sci Rep (2026). https://doi.org/10.1038/s41598-026-40387-9


  • DOI: https://doi.org/10.1038/s41598-026-40387-9
