Information Sciences, 2024-06, Vol. 670, p. 120608, Article 120608
Place / Publisher
Elsevier Inc
Year of publication
2024
Link to full text
Source
Alma/SFX Local Collection
Descriptions/Notes
Distribution shift is widespread in graph representation learning and often degrades model performance. This work investigates how to improve the performance of a graph neural network (GNN) on a single graph by controlling the distribution shift between embedding spaces. Specifically, we provide an upper error-bound estimation that quantitatively analyzes how distribution shift affects a GNN's performance on a single graph. Since a single graph admits no natural domain division, we propose PW-GNN, which simultaneously learns discriminative embeddings and reduces distribution shift. PW-GNN measures distribution discrepancy by the distance between test embeddings and prototypes, and recasts minimizing distribution shift as minimizing a power of the Wasserstein distance, which is introduced into the GNN as a regularizer. A series of theoretical analyses demonstrates the effectiveness of PW-GNN. In addition, a low-complexity training algorithm is designed by combining an entropy-regularized strategy with a block coordinate descent method. Extensive numerical experiments are conducted on different datasets with both biased and unbiased splits, testing our model with four backbone models. Results show that PW-GNN outperforms state-of-the-art baselines and mitigates up to 8% of the negative effects of distribution shift on the backbones.
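The core ingredient described in the abstract, an entropy-regularized Wasserstein discrepancy between test embeddings and prototypes, can be illustrated with a standard Sinkhorn computation. This is a minimal NumPy sketch of the general technique, not the authors' PW-GNN implementation: the function name, uniform marginals, regularization strength, and toy data are all illustrative assumptions.

```python
import numpy as np

def sinkhorn_distance(X, P, reg=1.0, n_iters=200):
    """Entropy-regularized Wasserstein distance between embeddings X (n, d)
    and prototypes P (k, d), assuming uniform marginals on both sides.
    (Illustrative sketch; not the PW-GNN paper's exact objective.)"""
    n, k = X.shape[0], P.shape[0]
    # Pairwise squared-Euclidean cost matrix between embeddings and prototypes
    C = ((X[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / reg)                 # Gibbs kernel of the cost
    a = np.full(n, 1.0 / n)              # uniform mass on embeddings
    b = np.full(k, 1.0 / k)              # uniform mass on prototypes
    u = np.ones(n)
    for _ in range(n_iters):             # Sinkhorn fixed-point iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    T = u[:, None] * K * v[None, :]      # approximate optimal transport plan
    return float((T * C).sum())          # transport cost under the plan

# Toy check: embeddings clustered around two hypothetical prototypes
rng = np.random.default_rng(0)
P = np.array([[0.0, 0.0], [5.0, 5.0]])
X = P[rng.integers(0, 2, size=32)] + 0.1 * rng.normal(size=(32, 2))
d_near = sinkhorn_distance(X, P)         # embeddings close to prototypes
d_far = sinkhorn_distance(X + 3.0, P)    # shifted embeddings
print(d_near < d_far)
```

In a training loop, a differentiable version of this quantity (e.g. computed on a framework's tensors) would be added to the task loss as a regularizer, so that minimizing the loss also pulls test-time embeddings toward the prototypes.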