Federated GNN for Distributed Link Prediction
Memory-efficient federated graph convolutional network training on commodity hardware (IEEE Big Data 2020)
Applied federated learning to enable collaborative, privacy-preserving graph learning among organizations running distributed graph database systems. The implementation was built on the JasmineGraph distributed graph database system.
- Built memory-efficient distributed/federated GCN training that enables training on very large graphs on commodity hardware, using JasmineGraph and parallel workers.
- Developed novel aggregation mechanisms for graph learning across heterogeneous, multi-organization graphs while preserving each organization's data privacy.
- Peer-reviewed publication at IEEE Big Data 2020.
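The federated aggregation step can be illustrated with a minimal FedAvg-style sketch: each worker trains a local GCN on its graph partition and only model weights (never raw graph data) are sent to the coordinator, which averages them weighted by local training-set size. The `fed_avg` function and its weighting scheme here are assumptions for illustration, not the paper's actual aggregation mechanism.

```python
import numpy as np

def fed_avg(worker_weights, worker_sizes):
    """Aggregate per-worker GCN weights, weighted by local graph size.

    worker_weights: one list of np.ndarray per worker (one array per layer).
    worker_sizes:   number of local training nodes per worker.

    NOTE: illustrative sketch only; the published system's aggregation
    mechanism may differ.
    """
    total = float(sum(worker_sizes))
    n_layers = len(worker_weights[0])
    aggregated = []
    for layer in range(n_layers):
        acc = np.zeros_like(worker_weights[0][layer])
        for weights, size in zip(worker_weights, worker_sizes):
            acc += (size / total) * weights[layer]
        aggregated.append(acc)
    return aggregated
```

Because only layer weights cross organization boundaries, each party's graph topology and node features stay local, which is the privacy property the federated setup targets.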
Technologies: C++, Python, StellarGraph, TensorFlow
Supervisors: Prof. Sanath Jayasena, Dr. Miyuru Dayarathna
| Links: Publication | Source |