PairNorm: Tackling Oversmoothing in GNNs

Lingxiao Zhao, Leman Akoglu

Keywords: graph networks, normalization

Mon Session 1 (05:00-07:00 GMT)
Mon Session 3 (12:00-14:00 GMT)

Abstract: The performance of graph neural nets (GNNs) is known to degrade gradually as the number of layers increases. This decay is partly attributed to oversmoothing, where repeated graph convolutions eventually make node embeddings indistinguishable. We take a closer look at two different interpretations, aiming to quantify oversmoothing. Our main contribution is PairNorm, a novel normalization layer based on a careful analysis of the graph convolution operator, which prevents all node embeddings from becoming too similar. Moreover, PairNorm is fast, easy to implement without any change to network architecture or any additional parameters, and is broadly applicable to any GNN. Experiments on real-world graphs demonstrate that PairNorm makes deeper GCN, GAT, and SGC models more robust against oversmoothing, and significantly boosts performance for a new problem setting that benefits from deeper GNNs. Code is available at https://github.com/LingxiaoShawn/PairNorm.
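For concreteness, below is a minimal sketch of the normalization the abstract describes, assuming PyTorch: center the node embeddings, then rescale them so their mean squared norm stays constant, which keeps the total pairwise distance between embeddings from collapsing under repeated graph convolutions. The function name and the `scale`/`eps` parameters are illustrative; the authors' implementation at the repository above is the authoritative one.

```python
import torch

def pair_norm(x: torch.Tensor, scale: float = 1.0, eps: float = 1e-6) -> torch.Tensor:
    """Illustrative PairNorm-style normalization (not the official code).

    x: (num_nodes, dim) node embedding matrix produced by a graph convolution.
    """
    # Center: subtract the mean embedding so rows stop drifting toward a common point.
    x = x - x.mean(dim=0, keepdim=True)
    # Rescale: fix the mean squared row norm so the total pairwise squared
    # distance between node embeddings stays roughly constant across layers.
    mean_sq_norm = x.pow(2).sum(dim=1).mean()
    return scale * x / torch.sqrt(mean_sq_norm + eps)
```

In use, such a layer would be inserted between successive graph-convolution layers (e.g., after each GCN layer's propagation step), consistent with the abstract's claim of requiring no architectural changes or additional parameters.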

Similar Papers

DropEdge: Towards Deep Graph Convolutional Networks on Node Classification
Yu Rong, Wenbing Huang, Tingyang Xu, Junzhou Huang
Memory-Based Graph Networks
Amir Hosein Khasahmadi, Kaveh Hassani, Parsa Moradi, Leo Lee, Quaid Morris
DeepSphere: a graph-based spherical CNN
Michaël Defferrard, Martino Milani, Frédérick Gusset, Nathanaël Perraudin
Towards Stabilizing Batch Statistics in Backward Propagation of Batch Normalization
Junjie Yan, Ruosi Wan, Xiangyu Zhang, Wei Zhang, Yichen Wei, Jian Sun