Generating Real-Time Explanations for GNNs via Multiple Specialty Learners and Online Knowledge Distillation
Is Part Of
IEEE access, 2023, Vol.11, p.40790-40808
Place / Publisher
Piscataway: IEEE
Publication Year
2023
Source
EZB Electronic Journals Library
Descriptions/Notes
Graph Neural Networks (GNNs) have become ubiquitous in numerous applications, making explanations of their predictions increasingly necessary. However, explaining GNNs is challenging due to the complexity of graph data and model execution. Post-hoc explanation approaches have gained popularity because of their versatility, despite their additional computational cost. Although intrinsically interpretable models can provide instant explanations, they are usually model-specific and can explain only particular GNNs. To address these challenges, we propose a novel, general, and fast GNN explanation framework named SCALE. Because building a single powerful explainer that examines the attributions of all interactions in input graphs is complicated, SCALE instead trains multiple specialty learners to explain GNNs. During training, a black-box GNN model guides the learners via an online knowledge distillation paradigm. During the explanation phase, explanations of predictions are generated by multiple explainers corresponding to the trained learners. Edge masking and random walk with restart procedures provide structural explanations for graph-level and node-level predictions, respectively, while a feature attribution module provides overall summaries and instance-level feature contributions. We compare SCALE with state-of-the-art baselines through extensive experiments to demonstrate its explanation correctness and execution performance. Furthermore, we conduct a user study and a series of ablation studies to understand its strengths and weaknesses.
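The random walk with restart (RWR) mentioned in the abstract is a standard procedure for scoring how relevant each node is to a target node: a walker repeatedly steps along edges but, with some probability, jumps back to the target. The sketch below is a generic, hypothetical illustration of RWR on a dense adjacency matrix (the function name, parameters, and NumPy implementation are assumptions, not the authors' code):

```python
import numpy as np

def random_walk_with_restart(adj, seed, restart_prob=0.15, tol=1e-8, max_iter=1000):
    """Score each node's relevance to `seed` via random walk with restart.

    adj: (n, n) nonnegative adjacency matrix.
    seed: index of the node whose prediction is being explained.
    Returns a visiting-probability vector over all nodes (sums to 1).
    """
    n = adj.shape[0]
    # Column-normalize so each column is a transition distribution.
    col_sums = adj.sum(axis=0).astype(float)
    col_sums[col_sums == 0] = 1.0  # avoid division by zero for isolated nodes
    trans = adj / col_sums

    restart = np.zeros(n)
    restart[seed] = 1.0  # walker teleports back to the seed node
    p = restart.copy()
    for _ in range(max_iter):
        # With prob. (1 - restart_prob) take a step, else restart at the seed.
        p_next = (1 - restart_prob) * (trans @ p) + restart_prob * restart
        if np.abs(p_next - p).sum() < tol:  # converged to the stationary vector
            return p_next
        p = p_next
    return p
```

Nodes with high stationary probability are those most strongly connected to the seed, which is why RWR scores can serve as structural importance weights for node-level explanations.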