Abstract:
The recent success of deep learning has renewed interest in neural architectures able to process data with a complex graph-based structure, inspired by the Graph Neural Network (GNN) model. The GNN of Scarselli et al. (2009) encodes the states of the graph nodes by an iterative diffusion procedure that, at every training epoch, must be run until a fixed point of a learnable state transition function is reached, propagating information among neighboring nodes. We propose a novel method to learn GNNs based on constrained optimization in the Lagrangian framework. The convergence of the state diffusion is expressed implicitly by a constraint satisfaction mechanism, avoiding the explicit epoch-wise iterative procedure and the unfolding of the network, while the transition function and the node states are learned jointly. Our computational structure searches for saddle points of the Lagrangian in the adjoint space composed of the weights, the node state variables, and the Lagrange multipliers. Multiple constraints further accelerate the diffusion process. Experiments show that the proposed approach outperforms popular models on several benchmarks.
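For concreteness, a minimal sketch of such a Lagrangian formulation follows; the notation is ours and not taken from the abstract: x_v is the state of node v, l_v its label, f_w the learnable state transition function, g_w the output function, \ell a supervised loss over the labeled node set S, \lambda_v the Lagrange multipliers, and \mathcal{G} a function that vanishes only when its argument is zero.

\[
\min_{w,\,x}\ \max_{\lambda}\ \mathcal{L}(w, x, \lambda) \;=\; \sum_{v \in S} \ell\big(g_w(x_v),\, y_v\big) \;+\; \sum_{v \in V} \lambda_v\, \mathcal{G}\big(x_v - f_w(x_{\mathrm{ne}(v)},\, l_v)\big)
\]

A saddle point, sought by gradient descent in w and x and gradient ascent in \lambda, satisfies the constraints x_v = f_w(x_{\mathrm{ne}(v)}, l_v) for all nodes, so the fixed point of the transition function emerges as a by-product of learning rather than from an explicit iterative diffusion at every epoch.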