Layer-stacked Attention for Heterogeneous Network Embedding
Published as an arXiv preprint, 2020
This paper proposes Layer-stacked Attention for Heterogeneous Network Embedding (LATTE), a new architecture that addresses the challenge of aggregating higher-order information in heterogeneous networks while maintaining interpretability. At each layer, the method decomposes higher-order meta relations to extract the relevant neighborhood structures for each node, then successively stacks the layer representations, offering a more interpretable aggregation scheme for nodes of different types at different neighborhood ranges. Experimental results on several benchmark heterogeneous network datasets show that LATTE achieves state-of-the-art performance compared with existing approaches while remaining lightweight. The method can also extract informative insights from heterogeneous networks and generalizes to unseen nodes by accurately encoding their links and attributes.
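To make the layer-stacking idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: each layer aggregates neighbors per relation (here with a simple mean aggregator and a hypothetical softmax attention over relations), and the per-layer representations are stacked by concatenation. All function names, the aggregation scheme, and the attention parameterization are assumptions for illustration only.

```python
# Illustrative sketch (NOT the authors' code): per-relation neighbor
# aggregation with relation-level attention, then stacking of layer outputs.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relation_layer(h, adjs, W, rel_scores):
    """One layer: aggregate neighbors per relation, then combine relations
    with softmax attention weights (a hypothetical, simplified scheme)."""
    msgs = []
    for A in adjs:                            # one adjacency per relation
        deg = A.sum(axis=1, keepdims=True)
        deg[deg == 0] = 1.0                   # guard isolated nodes
        msgs.append((A / deg) @ h @ W)        # mean aggregation + projection
    msgs = np.stack(msgs)                     # (R, N, d)
    alpha = softmax(rel_scores)               # (R,) attention over relations
    return np.tanh((alpha[:, None, None] * msgs).sum(axis=0))

def latte_like_embed(x, adjs, weights, rel_scores_per_layer):
    """Run L layers and concatenate the per-layer representations,
    mirroring the layer-stacking idea at a high level."""
    h, layers = x, []
    for W, s in zip(weights, rel_scores_per_layer):
        h = relation_layer(h, adjs, W, s)
        layers.append(h)
    return np.concatenate(layers, axis=1)     # (N, L * d)
```

Concatenating (rather than overwriting) the per-layer outputs is what keeps each neighborhood range separately inspectable, which is the interpretability angle the abstract describes.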
Recommended citation:
@article{tran2020layer,
  title={Layer-stacked Attention for Heterogeneous Network Embedding},
  author={Tran, Nhat and Gao, Jean},
  journal={arXiv preprint arXiv:2009.08072},
  year={2020}
}