Memory-Efficient Graph Convolutional Network-Based Distributed Link Prediction
Isuru Wijesiri et al.
In 2020 IEEE International Conference on Big Data (Big Data), 2020
Introduces a memory-efficient federated training scheme that uses the JasmineGraph distributed graph database to partition large graphs and train GCN link-prediction models in parallel across workers. This lets a single commodity server train on datasets as large as DBLP-V11 (10+ GB), which conventional in-memory GCNs cannot load, and cuts training time by 3-5x on Reddit and Twitter graphs compared with standard in-memory methods.
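The core idea, partitioning a graph across workers that each train a local GCN and then averaging their weights, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the partition data, the single-layer GCN, and the FedAvg-style `federated_average` helper are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(A):
    """Symmetric GCN normalization with self-loops: D^{-1/2}(A+I)D^{-1/2}."""
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_embed(A_hat, X, W):
    """One GCN layer: propagate neighbor features, then ReLU."""
    return np.maximum(A_hat @ X @ W, 0.0)

def federated_average(weight_list):
    """FedAvg-style aggregation: element-wise mean of worker weights."""
    return sum(weight_list) / len(weight_list)

# Two hypothetical partitions of a small graph, each held by one worker
# (adjacency matrix, node feature matrix). In the paper's setting the
# partitioning is done by JasmineGraph over much larger graphs.
partitions = [
    (np.array([[0, 1], [1, 0]], float), rng.normal(size=(2, 3))),
    (np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float), rng.normal(size=(3, 3))),
]

# Each worker computes embeddings locally (here: one illustrative forward
# pass); only the weight matrices are shared and averaged, so the full
# graph never needs to fit in one worker's memory.
local_weights = []
for A, X in partitions:
    W = rng.normal(size=(3, 4))          # worker-local GCN weights
    Z = gcn_embed(normalize(A), X, W)    # node embeddings for link scoring
    local_weights.append(W)

W_global = federated_average(local_weights)
print(W_global.shape)  # (3, 4)
```

Link prediction would then score a candidate edge (u, v) from the learned embeddings, e.g. via a dot product `Z[u] @ Z[v]`.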