Abstract:
Graph Neural Networks (GNNs) excel at tasks such as recommendation, node classification, and link prediction. GNN models construct node embeddings by combining a node's own features with information gathered from its neighbors. Most GNN models use a single aggregator (e.g., mean-pooling) to summarize the neighbors' information and then add or concatenate the output to the center node's representation vector. However, a single aggregator type struggles to capture the different aspects of neighborhood information, and simple addition or concatenation as the update step limits the expressiveness of GNNs. Moreover, existing supervised and semi-supervised GNN models are trained solely with a node-label loss, ignoring the graph's structural information. This paper introduces the Graph Attention & Interaction Network (GAIN) for inductive learning on graphs. Unlike previous GNN models that rely on a single aggregator, we apply multiple aggregator types to gather neighborhood information from different aspects and integrate their outputs through an aggregator-level attention mechanism. To better preserve the graph's topological information, we design a graph-regularized loss. We further introduce the concept of graph feature interaction and propose a vector-wise explicit feature interaction mechanism to update node embeddings. We conduct extensive experiments on two node-classification benchmarks and a real-world financial news dataset. The experimental results show that GAIN outperforms state-of-the-art models on all tasks.
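To make the aggregator-level attention idea concrete, below is a minimal PyTorch sketch of one plausible layout: mean-, max-, and sum-pooling aggregators applied to a node's sampled neighbors, with attention weights learned over the three aggregator outputs. All names (`MultiAggregatorAttention`, the choice of aggregators, the scoring layer) are illustrative assumptions, not taken from the GAIN paper's released code.

```python
# Hypothetical sketch of aggregator-level attention, as described in the
# abstract; module and layer names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiAggregatorAttention(nn.Module):
    """Combine mean-, max-, and sum-pooled neighbor features using
    attention weights learned per aggregator output."""

    def __init__(self, dim: int):
        super().__init__()
        # Scores each aggregator output, conditioned on the center node's
        # embedding concatenated with that aggregated vector.
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, center: torch.Tensor, neigh: torch.Tensor) -> torch.Tensor:
        # center: (N, dim) center-node embeddings
        # neigh:  (N, K, dim) embeddings of K sampled neighbors per node
        aggs = torch.stack([
            neigh.mean(dim=1),         # mean-pooling aggregator
            neigh.max(dim=1).values,   # max-pooling aggregator
            neigh.sum(dim=1),          # sum aggregator
        ], dim=1)                       # -> (N, 3, dim)
        # Attention weight for each aggregator, normalized across the three.
        c = center.unsqueeze(1).expand_as(aggs)                    # (N, 3, dim)
        w = F.softmax(self.score(torch.cat([c, aggs], dim=-1)), dim=1)
        # Attention-weighted summary of the neighborhood.
        return (w * aggs).sum(dim=1)    # -> (N, dim)
```

The weighted summary would then be combined with the center node's embedding by the update step (in GAIN, the vector-wise explicit feature interaction mechanism rather than plain addition or concatenation).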