Publications

You can also find my articles on my Google Scholar profile.

Journal Articles


Core-periphery detection based on masked Bayesian nonnegative matrix factorization

Published in IEEE Transactions on Computational Social Systems (IEEE TCSS), 2024

Core–periphery structure is an essential mesoscale feature of complex networks. Previous research has mostly focused on discriminative approaches; in this work we propose a generative model called masked Bayesian nonnegative matrix factorization. We build the model with two pair affiliation matrices that indicate core–periphery pair associations and a mask matrix that highlights connections to core nodes. We derive an inference procedure for the model parameters and prove that the variables converge under it. Beyond the capabilities of traditional approaches, the model can also identify core scores in the presence of overlapping core–periphery pairs. We verify the effectiveness of our method on randomly generated and real-world networks, and the experimental results demonstrate that it outperforms traditional approaches.
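
To make the masking idea concrete, the sketch below runs a plain masked symmetric NMF with multiplicative updates, where a mask matrix weights which entries of the adjacency matrix are fitted. This is only a simplified, non-Bayesian illustration under my own assumptions (the function name, the toy network, and the uniform mask are all hypothetical); it is not the model or inference procedure from the paper.

```python
import numpy as np

def masked_symmetric_nmf(A, mask, k=2, n_iter=200, eps=1e-9, seed=0):
    """Toy masked symmetric NMF: approximate A ~ C @ C.T, weighting entries by `mask`.

    Simplified illustration only -- not the Bayesian model from the paper.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    C = rng.random((n, k))          # nonnegative factor; rows give per-node scores
    MA = mask * A
    for _ in range(n_iter):
        numer = MA @ C
        denom = (mask * (C @ C.T)) @ C + eps
        C *= numer / denom          # multiplicative update keeps C nonnegative
    return C

# Small synthetic example: a 6-node network with a dense core (nodes 0-2)
A = np.array([
    [0, 1, 1, 1, 0, 0],
    [1, 0, 1, 0, 1, 0],
    [1, 1, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 0],
], dtype=float)
mask = np.ones_like(A)              # uniform mask here; the paper instead emphasizes core connections
C = masked_symmetric_nmf(A, mask, k=1)
print(np.round(C.ravel(), 2))       # core nodes 0-2 should receive the largest scores
```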

Recommended citation: Wang, Zhonghao, et al. "Core–periphery detection based on masked Bayesian nonnegative matrix factorization." IEEE Transactions on Computational Social Systems 11.3 (2024): 4102-4113.
Download Paper | Download BibTeX

Conference Papers


NoisyGL: A Comprehensive Benchmark for Graph Neural Networks under Label Noise

Published in Advances in Neural Information Processing Systems 37 (NeurIPS 2024), 2024

Graph Neural Networks (GNNs) exhibit strong potential for node classification through their message-passing mechanism. However, their performance often hinges on high-quality node labels, which are difficult to obtain in real-world scenarios due to unreliable sources or adversarial attacks. Consequently, label noise is common in real-world graph data and degrades GNNs by propagating incorrect information during training. To address this issue, the study of Graph Neural Networks under Label Noise (GLN) has recently gained traction. However, because of variations in dataset selection, data splitting, and preprocessing techniques, the community currently lacks a comprehensive benchmark, which impedes deeper understanding and further development of GLN. To fill this gap, we introduce NoisyGL, the first comprehensive benchmark for graph neural networks under label noise. NoisyGL enables fair comparisons and detailed analyses of GLN methods on noisily labeled graph data across various datasets, with unified experimental settings and interfaces. Our benchmark has uncovered several important insights that were missed in previous research, and we believe these findings will be highly beneficial for future studies. We hope our open-source benchmark library will foster further advancements in this field. The code of the benchmark is available at https://github.com/eaglelab-zju/NoisyGL.
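
As a concrete illustration of the label-noise setting the benchmark studies, the sketch below injects uniform label noise into a vector of node labels: each label is flipped to a different class with a given probability. The function name and parameters are my own hypothetical choices; this does not use NoisyGL's actual interfaces (see the repository linked above for the real API).

```python
import numpy as np

def inject_uniform_label_noise(labels, noise_rate, num_classes, seed=0):
    """Flip each label to a uniformly chosen different class with probability `noise_rate`.

    Minimal toy example of the uniform-noise setting; NoisyGL's own noise
    generators and interfaces may differ.
    """
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    flip = rng.random(len(labels)) < noise_rate      # which nodes get a corrupted label
    for i in np.flatnonzero(flip):
        candidates = [c for c in range(num_classes) if c != labels[i]]
        noisy[i] = rng.choice(candidates)            # replace with a wrong class
    return noisy

labels = np.array([0, 1, 2, 0, 1, 2, 0, 1])
print(inject_uniform_label_noise(labels, noise_rate=0.3, num_classes=3))
```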

Recommended citation: Wang, Zhonghao, et al. "NoisyGL: A Comprehensive Benchmark for Graph Neural Networks under Label Noise." Advances in Neural Information Processing Systems 37 (2024): 38142-38170.
Download Paper | Download BibTeX