March 18, 2025

Towards unveiling sensitive and decisive patterns in explainable AI with a case study in geometric deep learning


  • Butler, K. T., Davies, D. W., Cartwright, H., Isayev, O. & Walsh, A. Machine learning for molecular and materials science. Nature 559, 547–555 (2018).

  • Carleo, G. et al. Machine learning and the physical sciences. Rev. Mod. Phys. 91, 045002 (2019).

  • Zhong, S. et al. Machine learning: new ideas and tools in environmental science and engineering. Environ. Sci. Technol. 55, 12741–12754 (2021).

  • Bergen, K. J., Johnson, P. A., de Hoop, M. V. & Beroza, G. C. Machine learning for data-driven discovery in solid earth geoscience. Science 363, eaau0323 (2019).

  • Qu, H. & Gouskos, L. Jet tagging via particle clouds. Phys. Rev. D 101, 056019 (2020).

  • Ju, X. et al. Performance of a geometric deep learning pipeline for HL-LHC particle tracking. Eur. Phys. J. C 81, 876 (2021).

  • Gainza, P. et al. Deciphering interaction fingerprints from protein molecular surfaces using geometric deep learning. Nat. Methods 17, 184–192 (2020).

  • Stärk, H., Ganea, O., Pattanaik, L., Barzilay, R. & Jaakkola, T. Equibind: geometric deep learning for drug binding structure prediction. In International Conference on Machine Learning 20503–20521 (PMLR, 2022).

  • Liao, Y.-L., Wood, B. M., Das, A. & Smidt, T. EquiformerV2: improved equivariant transformer for scaling to higher-degree representations. In The Twelfth International Conference on Learning Representations (2024).

  • Zhou, G. et al. Uni-Mol: a universal 3D molecular representation learning framework. In The Eleventh International Conference on Learning Representations (2023).

  • Schütt, K. et al. SchNet: a continuous-filter convolutional neural network for modeling quantum interactions. Adv. Neural Inf. Process. Syst. 30, 992–1002 (2017).

  • Jing, B., Eismann, S., Suriana, P., Townshend, R. J. L. & Dror, R. O. Learning from protein structure with geometric vector perceptrons. In 9th International Conference on Learning Representations (2021).

  • Bogatskiy, A. et al. Lorentz group equivariant neural network for particle physics. In International Conference on Machine Learning 992–1002 (PMLR, 2020).

  • Sanchez-Gonzalez, A. et al. Learning to simulate complex physics with graph networks. In International Conference on Machine Learning 8459–8468 (PMLR, 2020).

  • Doshi-Velez, F. & Kim, B. Towards a rigorous science of interpretable machine learning. Preprint at https://arxiv.org/abs/1702.08608 (2017).

  • Madsen, A., Reddy, S. & Chandar, S. Post-hoc interpretability for neural NLP: a survey. ACM Comput. Surveys 55, 1–42 (2022).

  • Danilevsky, M. et al. A survey of the state of explainable AI for natural language processing. In Proc. 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, AACL/IJCNLP (eds Wong, K. et al.) 447–459 (Association for Computational Linguistics, 2020).

  • Jacovi, A. & Goldberg, Y. Towards faithfully interpretable NLP systems: how should we define and evaluate faithfulness? In Proc. 58th Annual Meeting of the Association for Computational Linguistics (eds Jurafsky, D. et al.) 4198–4205 (Association for Computational Linguistics, 2020).

  • Linardatos, P., Papastefanopoulos, V. & Kotsiantis, S. Explainable AI: a review of machine learning interpretability methods. Entropy 23, 18 (2020).

  • Yuan, H., Yu, H., Wang, J., Li, K. & Ji, S. On explainability of graph neural networks via subgraph explorations. In International Conference on Machine Learning 12241–12252 (PMLR, 2021).

  • Yuan, H., Yu, H., Gui, S. & Ji, S. Explainability in graph neural networks: a taxonomic survey. IEEE Trans. Pattern Anal. Mach. Intell. 45, 5782–5799 (2022).

  • Vu, M. & Thai, M. T. PGM-Explainer: probabilistic graphical model explanations for graph neural networks. Adv. Neural Inf. Process. Syst. 33, 12225–12235 (2020).

  • Miao, S., Liu, M. & Li, P. Interpretable and generalizable graph learning via stochastic attention mechanism. In International Conference on Machine Learning 15524–15543 (PMLR, 2022).

  • Luo, D. et al. Parameterized explainer for graph neural network. Adv. Neural Inf. Process. Syst. 33, 19620–19631 (2020).

  • Arrieta, A. B. et al. Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 58, 82–115 (2020).

  • Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 1, 206–215 (2019).

  • Laugel, T., Lesot, M.-J., Marsala, C., Renard, X. & Detyniecki, M. The dangers of post-hoc interpretability: unjustified counterfactual explanations. In Proc. 28th International Joint Conference on Artificial Intelligence 2801–2807 (AAAI, 2019).

  • Amara, K. et al. GraphFramEx: towards systematic evaluation of explainability methods for graph neural networks. In Learning on Graphs Conference, Proceedings of Machine Learning Research (eds Rieck, B. & Pascanu, R.) Vol. 198, 44 (PMLR, 2022).

  • Sanchez-Lengeling, B. et al. Evaluating attribution for graph neural networks. Adv. Neural Inf. Process. Syst. 33, 5898–5910 (2020).

  • Longa, A. et al. Explaining the explainers in graph neural networks: a comparative study. ACM Comput. Surv. 57, 1–37 (2025).

  • Chen, J., Amara, K., Yu, J. & Ying, R. Generative explanation for graph neural network: methods and evaluation. IEEE Data Eng. Bull. 46, 64–79 (2023).

  • Adebayo, J., Muelly, M., Abelson, H. & Kim, B. Post hoc explanations may be ineffective for detecting unknown spurious correlation. In International Conference on Learning Representations (2021).

  • Slack, D., Hilgard, A., Singh, S. & Lakkaraju, H. Reliable post hoc explanations: modeling uncertainty in explainability. Adv. Neural Inf. Process. Syst. 34, 9391–9404 (2021).

  • Serrano, S. & Smith, N. A. Is attention interpretable? In Proc. 57th Conference of the Association for Computational Linguistics (eds Korhonen, A. et al.) Vol. 1, 2931–2951 (Association for Computational Linguistics, 2019).

  • Wiegreffe, S. & Pinter, Y. Attention is not not explanation. In Proc. 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (eds Inui, K. et al.) 11–20 (Association for Computational Linguistics, 2019).

  • Mohankumar, A. K. et al. Towards transparent and explainable attention models. In Proc. 58th Annual Meeting of the Association for Computational Linguistics (eds Jurafsky, D. et al.) 4206–4216 (Association for Computational Linguistics, 2020).

  • Bai, B. et al. Why attentions may not be interpretable? In Proc. 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining 25–34 (Association for Computing Machinery, 2021).

  • Wang, C., Han, B., Patel, B. & Rudin, C. In pursuit of interpretable, fair and accurate machine learning for criminal recidivism prediction. J. Quant. Criminol. 39, 519–581 (2023).

  • Li, Y., Zhou, J., Verma, S. & Chen, F. A survey of explainable graph neural networks: taxonomy and evaluation metrics. Preprint at https://arxiv.org/abs/2207.12599 (2022).

  • Wu, B. et al. Trustworthy graph learning: reliability, explainability, and privacy protection. In Proc. 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining 4838–4839 (Association for Computing Machinery, 2022).

  • Kakkad, J., Jannu, J., Sharma, K., Aggarwal, C. & Medya, S. A survey on explainability of graph neural networks. IEEE Data Eng. Bull. 46, 35–63 (2023).

  • Zhang, H. et al. Trustworthy graph neural networks: aspects, methods, and trends. Proc. IEEE 112, 97–139 (2024).

  • Miao, S., Luo, Y., Liu, M. & Li, P. Interpretable geometric deep learning via learnable randomness injection. In The Eleventh International Conference on Learning Representations (2023).

  • Zhu, J. Graph-COM/xgdl: v0.0.4 (v0.0.4). Zenodo https://doi.org/10.5281/zenodo.13994446 (2024).

  • Ying, Z., Bourgeois, D., You, J., Zitnik, M. & Leskovec, J. GNNExplainer: generating explanations for graph neural networks. Adv. Neural Inf. Process. Syst. 32, 9240–9251 (2019).

  • Ai, X. et al. A common tracking software project. Comput. Softw. Big Sci. 6, 8 (2022).

  • McCloskey, K., Taly, A., Monti, F., Brenner, M. P. & Colwell, L. J. Using attribution to decode binding mechanism in neural network models for chemistry. Proc. Natl. Acad. Sci. 116, 11624–11629 (2019).

  • Chen, J. & Ying, R. TempME: towards the explainability of temporal graph neural networks via motif discovery. Adv. Neural Inf. Process. Syst. 36, 29005–29028 (2024).

  • Satorras, V. G., Hoogeboom, E. & Welling, M. E(n) equivariant graph neural networks. In Proc. 38th International Conference on Machine Learning (eds Meila, M. & Zhang, T.) 9323–9332 (PMLR, 2021).

  • Wang, Y. et al. Dynamic graph CNN for learning on point clouds. ACM Trans. Graph. 38, 1–12 (2019).

  • Zhao, H., Jiang, L., Jia, J., Torr, P. & Koltun, V. Point transformer. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV) 16239–16248 (IEEE, 2021).

  • Shrikumar, A., Greenside, P. & Kundaje, A. Learning important features through propagating activation differences. In Proc. 34th International Conference on Machine Learning (eds Precup, D. & Teh, Y. W.) Vol. 70, 3145–3153 (PMLR, 2017).

  • Chattopadhay, A., Sarkar, A., Howlader, P. & Balasubramanian, V. N. Grad-CAM++: generalized gradient-based visual explanations for deep convolutional networks. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV) 839–847 (IEEE, 2018).

  • Sundararajan, M., Taly, A. & Yan, Q. Axiomatic attribution for deep networks. In Proc. 34th International Conference on Machine Learning (eds Precup, D. & Teh, Y. W.) Vol. 70, 3319–3328 (PMLR, 2017).

  • Schnake, T. et al. Higher-order explanations of graph neural networks via relevant walks. IEEE Trans. Pattern Anal. Mach. Intell. 44, 7581–7596 (2021).

  • Yu, J., Cao, J. & He, R. Improving subgraph recognition with variational graph information bottleneck. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition 19396–19405 (IEEE, 2022).

  • Chen, Y. et al. Learning causally invariant representations for out-of-distribution generalization on graphs. Adv. Neural Inf. Process. Syst. 35, 22131–22148 (2022).

  • Ranjan, E., Sanyal, S. & Talukdar, P. ASAP: adaptive structure aware pooling for learning hierarchical graph representations. Proc. AAAI Conf. Artif. Intell. 34, 5470–5477 (2020).

  • Dai, J., Upadhyay, S., Aivodji, U., Bach, S. H. & Lakkaraju, H. Fairness via explanation quality: evaluating disparities in the quality of post hoc explanations. In Proc. 2022 AAAI/ACM Conference on AI, Ethics, and Society 203–214 (Association for Computing Machinery, 2022).

  • Agarwal, C., Queen, O., Lakkaraju, H. & Zitnik, M. Evaluating explainability for graph neural networks. Sci. Data 10, 144 (2023).

  • Retzlaff, C. O. et al. Post-hoc vs ante-hoc explanations: XAI design guidelines for data scientists. Cogn. Syst. Res. 86, 101243 (2024).

  • Atz, K., Grisoni, F. & Schneider, G. Geometric deep learning on molecular representations. Nat. Mach. Intell. 3, 1023–1032 (2021).

  • Gagliardi, L. et al. SHREC 2022: protein–ligand binding site recognition. Comput. Graph. 107, 20–31 (2022).

  • Knyazev, B., Taylor, G. W. & Amer, M. Understanding attention and generalization in graph neural networks. Adv. Neural Inf. Process. Syst. 32, 4202–4212 (2019).

  • Baldassarre, F. & Azizpour, H. Explainability techniques for graph convolutional networks. In International Conference on Machine Learning (ICML) Workshops, 2019 Workshop on Learning and Reasoning with Graph-Structured Representations (2019).

  • Montavon, G. et al. Layer-wise relevance propagation: an overview. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning (eds Samek, W. et al.) 193–209 (Springer, 2019).

  • Xiong, P., Schnake, T., Montavon, G., Müller, K.-R. & Nakajima, S. Efficient computation of higher-order subgraph attribution via message passing. In International Conference on Machine Learning 24478–24495 (PMLR, 2022).

  • Pope, P. E., Kolouri, S., Rostami, M., Martin, C. E. & Hoffmann, H. Explainability methods for graph convolutional neural networks. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition 10772–10781 (IEEE, 2019).

  • Schlichtkrull, M. S., Cao, N. D. & Titov, I. Interpreting graph neural networks for NLP with differentiable edge masking. In 9th International Conference on Learning Representations (2021).

  • Bui, N., Nguyen, H. T., Nguyen, V. A. & Ying, R. Explaining graph neural networks via structure-aware interaction index. In Proc. 41st International Conference on Machine Learning 4854–4883 (PMLR, 2024).

  • Silver, D. et al. Mastering the game of go without human knowledge. Nature 550, 354–359 (2017).

  • Huang, Q., Yamada, M., Tian, Y., Singh, D. & Chang, Y. GraphLIME: local interpretable model explanations for graph neural networks. IEEE Trans. Knowl. Data Eng. 35, 6968–6972 (2023).

  • Zhang, Y., Defazio, D. & Ramesh, A. RelEx: a model-agnostic relational model explainer. In Proc. 2021 AAAI/ACM Conference on AI, Ethics, and Society 1042–1049 (Association for Computing Machinery, 2021).

  • Velickovic, P. et al. Graph attention networks. In 6th International Conference on Learning Representations (2018).

  • Ma, L., Rabbany, R. & Romero-Soriano, A. Graph attention networks with positional embeddings. In Pacific-Asia Conference on Knowledge Discovery and Data Mining 514–527 (Springer, 2021).

  • Yu, J. et al. Graph information bottleneck for subgraph recognition. In 9th International Conference on Learning Representations (2021).

  • Wu, Y., Wang, X., Zhang, A., He, X. & Chua, T. Discovering invariant rationales for graph neural networks. In The Tenth International Conference on Learning Representations (2022).

  • De Leo, K. et al. Search for the lepton flavor violating τ → 3μ decay in proton–proton collisions at 13 TeV. Phys. Lett. B 853, 138633 (2024).

  • Oerter, R. The Theory of Almost Everything: The Standard Model, the Unsung Triumph of Modern Physics (Penguin, 2006).

  • Blackstone, P., Fael, M. & Passemar, E. τ → μμμ at a rate of one out of 10¹⁴ tau decays? Eur. Phys. J. C (2020).

  • Calibbi, L. & Signorelli, G. Charged lepton flavour violation: an experimental and theoretical introduction. Riv. Nuovo Cim. 41, 71–174 (2018).

  • ATLAS Collaboration. Search for charged-lepton-flavour violation in Z-boson decays with the ATLAS detector. Nat. Phys. 17, 819–825 (2021).

  • Faber, F. A. et al. Prediction errors of molecular machine learning models lower than hybrid DFT error. J. Chem. Theory Comput. 13, 5255–5264 (2017).

  • Wang, R., Fang, X., Lu, Y., Yang, C.-Y. & Wang, S. The PDBbind database: methodologies and updates. J. Med. Chem. 48, 4111–4119 (2005).

  • Miao, S. Interpretable geometric deep learning. Zenodo https://doi.org/10.5281/zenodo.7265547 (2022).


