MNL: A Highly-Efficient Model for Large-scale Dynamic Weighted Directed Network Representation
Chen, Minzhi; He, Chunlin; Luo, Xin
2023-06-01
Abstract: A Non-negative Latent-factorization-of-tensors model relying on a Non-negative and Multiplicative Update on Incomplete Tensors (NMU-IT) algorithm facilitates efficient representation learning for a Dynamic Weighted Directed Network (DWDN). However, an NMU-IT algorithm suffers from slow model convergence and inefficient hyper-parameter selection. To address these issues, this work proposes a Momentum-incorporated Biased Non-negative and Adaptive Latent-factorization-of-tensors (MNL) model. It adopts two-fold ideas: 1) incorporating a generalized momentum method into the NMU-IT algorithm to enable fast model convergence; and 2) facilitating hyper-parameter self-adaptation via Particle Swarm Optimization. Empirical studies on four real DWDNs indicate that the proposed MNL outperforms state-of-the-art models in efficient representation learning on a DWDN, as supported by its high computational efficiency and prediction accuracy for missing links. Moreover, its hyper-parameter-free training makes it highly practicable in real applications.
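The abstract describes a generalized momentum method layered on non-negative updates over an incomplete third-order DWDN tensor. The sketch below is an illustrative approximation only, not the paper's NMU-IT/MNL formulation: it performs stochastic updates on observed entries with a momentum (velocity) term and keeps factors non-negative by truncation rather than multiplicative updates. All names (train_momentum_nlft, S, D, T, beta, lam) are hypothetical.

```python
import numpy as np

def train_momentum_nlft(entries, shape, rank=5, lr=0.01, beta=0.9,
                        lam=0.05, epochs=100, seed=0):
    """entries: list of (i, j, k, value) observed weighted links of a DWDN tensor.

    Minimal sketch: momentum-incorporated non-negative latent factorization
    on observed entries only (missing entries are never touched).
    """
    rng = np.random.default_rng(seed)
    I, J, K = shape
    # Latent factor matrices for source nodes, destination nodes, and time slots.
    S = rng.random((I, rank)); D = rng.random((J, rank)); T = rng.random((K, rank))
    # Velocity buffers for the generalized momentum terms.
    vS = np.zeros_like(S); vD = np.zeros_like(D); vT = np.zeros_like(T)
    for _ in range(epochs):
        for i, j, k, y in entries:
            pred = np.sum(S[i] * D[j] * T[k])
            err = y - pred
            # Gradients of the L2-regularized squared loss on this observed entry.
            gS = -err * D[j] * T[k] + lam * S[i]
            gD = -err * S[i] * T[k] + lam * D[j]
            gT = -err * S[i] * D[j] + lam * T[k]
            # Momentum step: accumulate velocity, then move the factors.
            vS[i] = beta * vS[i] - lr * gS
            vD[j] = beta * vD[j] - lr * gD
            vT[k] = beta * vT[k] - lr * gT
            # Non-negativity kept by truncation after each update.
            S[i] = np.maximum(S[i] + vS[i], 0.0)
            D[j] = np.maximum(D[j] + vD[j], 0.0)
            T[k] = np.maximum(T[k] + vT[k], 0.0)
    return S, D, T
```

A recovered entry (i, j, k) is then predicted as np.sum(S[i] * D[j] * T[k]). In this sketch the learning rate, momentum coefficient, and regularization weight are fixed by hand; the paper instead tunes such hyper-parameters adaptively via Particle Swarm Optimization.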
Keywords: Tensors; Data models; Computational modeling; Adaptation models; Analytical models; Big Data; Heuristic algorithms; Dynamic weighted directed network; high-dimensional and incomplete tensor; non-negative latent-factorization-of-tensors; linear bias; high dimensional and incomplete; momentum method; particle swarm optimization; adaptive model
DOI: 10.1109/TBDATA.2022.3218064
Journal: IEEE TRANSACTIONS ON BIG DATA
ISSN: 2332-7790
Volume: 9; Issue: 3; Pages: 889-903
Corresponding Authors: He, Chunlin (hechunlin@cwnu.edu.cn); Luo, Xin (luoxin21@gmail.com)
Indexed by: SCI
WOS Accession Number: WOS:000988277900009
Language: English