Research on Graph Neural Networks
Graph Neural Networks (GNNs) process irregular data structures in which relationships between entities are as important as the entities themselves. Current implementations face computational barriers when scaling to graphs with millions of nodes: memory requirements can grow quadratically with the number of nodes under dense adjacency representations, and multi-layer message passing becomes prohibitively expensive as receptive fields expand with depth. Real-world applications such as molecular interaction networks and social graphs often exceed these practical limits.
The fundamental challenge lies in balancing the expressiveness of node representations with the computational efficiency needed for large-scale graph processing.
This page brings together solutions from recent research—including attention-based architectures, sampling strategies for large graphs, hierarchical approaches to graph representation, and memory-efficient message passing schemes. These and other approaches focus on making GNNs practical for industrial-scale graph applications while preserving their ability to capture complex structural patterns.
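As a concrete illustration of the sampling strategies mentioned above, the sketch below trains a two-layer GraphSAGE model with neighbor sampling in PyTorch Geometric, so each mini-batch touches only a bounded subgraph rather than the full adjacency. It is a minimal sketch, not a reference implementation: `data` (a `torch_geometric.data.Data` with `train_mask` and labels `y`) and `num_classes` are assumed to exist.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv
from torch_geometric.loader import NeighborLoader

class SAGE(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hid_dim)
        self.conv2 = SAGEConv(hid_dim, out_dim)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

# Sample 15 neighbors per node for the first hop and 10 for the second,
# so each mini-batch expands into a bounded subgraph instead of the
# full graph. `data` is an assumed torch_geometric.data.Data object.
loader = NeighborLoader(data, num_neighbors=[15, 10],
                        batch_size=1024, input_nodes=data.train_mask)

model = SAGE(data.num_features, 128, num_classes)  # num_classes assumed
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for batch in loader:
    optimizer.zero_grad()
    out = model(batch.x, batch.edge_index)
    # Only the first `batch_size` nodes are seed nodes; the rest are
    # sampled neighbors included solely for message passing.
    loss = F.cross_entropy(out[:batch.batch_size], batch.y[:batch.batch_size])
    loss.backward()
    optimizer.step()
```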
1. Reimagining Graph Classification from a Prototype View with Optimal Transport: Algorithm and Theorem
Chen Qian, Huayi Tang, Hong Liang - ACM, 2024
Recently, Graph Neural Networks (GNNs) have achieved impressive performance in graph classification tasks. However, the message passing mechanism in GNNs implicitly utilizes the topological information of the graph, which may lead to a potential loss of structural information. Furthermore, the graph classification decision process based on GNNs resembles a black box and lacks sufficient transparency. The non-linear classifier following the GNNs also defaults to the assumption that each class is represented by a single vector, thereby limiting the diversity of intra-class representations.
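To make the "single vector per class" limitation concrete, here is a hedged sketch of a multi-prototype classifier head: each class owns several learnable prototypes, and a graph embedding is scored by its distance to the nearest one. This is a generic illustration of the prototype view only; the paper's optimal-transport matching is omitted, and the class name is hypothetical.

```python
import torch

class PrototypeClassifier(torch.nn.Module):
    """Scores an embedding against several learned prototypes per class,
    rather than a single weight vector per class (illustrative sketch;
    the paper's optimal-transport formulation is not reproduced)."""
    def __init__(self, emb_dim, num_classes, protos_per_class=3):
        super().__init__()
        self.prototypes = torch.nn.Parameter(
            torch.randn(num_classes, protos_per_class, emb_dim))

    def forward(self, z):                            # z: [batch, emb_dim]
        C, P, D = self.prototypes.shape
        flat = self.prototypes.reshape(C * P, D)     # [C*P, emb_dim]
        dist = torch.cdist(z, flat)                  # [batch, C*P]
        # Class score = negative distance to that class's nearest
        # prototype, so intra-class diversity is represented explicitly.
        return -dist.reshape(-1, C, P).min(dim=-1).values  # [batch, C]
```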
2. Graph Convolutional Neural Networks In The Companion Model
J. Y. Shi, Shreyas Chaudhari, José M. F. Moura - IEEE, 2024
Graph Convolutional Neural Networks (graph CNNs) adapt the traditional CNN architecture for use on graphs, replacing convolution layers with graph convolution layers. Although similar in architecture, graph CNNs are used for geometric deep learning, whereas conventional CNNs are used for deep learning on grid-based data, such as audio or images, with seemingly no direct relationship between the two classes of neural networks. This paper shows that under certain conditions traditional CNNs can be used with graph data as a good approximation to graph CNNs, avoiding the need for graph CNNs. We show this by using an alternative graph signal representation – the graph companion model that we recently proposed in [1]. Instead of using the given graph and signal in the nodal domain, the graph companion model uses the equivalent companion graph and signal representation in the companion domain. In this way, the graph CNN architecture in the nodal domain is equivalent to our deep learning architecture: a traditional CNN in the companion domain with appropriate boundary conditions (b.c.). The pa... Read More
3. AutoFGNN: A Framework for Extracting All Frequency Information from Large-Scale Graphs
Qi Zhang, Yanfeng Sun, Jipeng Guo - IEEE, 2024
Graph Neural Networks (GNNs) are a powerful model for deep learning on graph-structured data, but their scalability limitations are receiving increasing attention. To tackle these limitations, two categories of scalable GNNs have been proposed: sampling-based and model simplification methods. However, sampling-based methods suffer from high communication costs and poor performance due to the sampling process. Conversely, existing model simplification methods rely only on parameter-free feature propagation, disregarding its spectral properties. Consequently, these methods can capture only low-frequency information while discarding valuable middle- and high-frequency information. This paper proposes Automatic Filtering Graph Neural Networks (AutoFGNN), a framework that can extract all frequency information from large-scale graphs. AutoFGNN employs parameter-free low-, middle-, and high-pass filters, which extract the corresponding information for all nodes without introducing parameters. To merge the extracted features, a trainable transformer-based information fusion module is utilized, en... Read More
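The core mechanism is easy to sketch: with the symmetric-normalized adjacency $\hat{A}$, repeated multiplication acts as a low-pass filter, $I - \hat{A}$ as a high-pass filter, and their composition as a band-pass filter, all without trainable parameters. The snippet below is an illustrative sketch of such parameter-free propagation (the paper's exact filter definitions may differ); it uses dense matrices for clarity.

```python
import torch

def normalized_adj(edge_index, num_nodes):
    """Dense symmetric-normalized adjacency A_hat = D^{-1/2} A D^{-1/2}.
    Dense for readability; sparse ops would be used at scale."""
    A = torch.zeros(num_nodes, num_nodes)
    A[edge_index[0], edge_index[1]] = 1.0
    deg = A.sum(dim=1).clamp(min=1)
    d_inv_sqrt = deg.pow(-0.5)
    return d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def multi_frequency_features(x, A_hat, k=2):
    """Parameter-free low-, band-, and high-pass propagation
    (illustrative filter choices, not necessarily the paper's)."""
    low, high = x, x
    for _ in range(k):
        low = A_hat @ low            # low-pass: repeated smoothing
        high = high - A_hat @ high   # high-pass: (I - A_hat)^k x
    band = A_hat @ (x - A_hat @ x)   # band-pass: smooth the residual once
    return low, band, high
```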
4. GraphSAGE++: Weighted Multi-scale GNN for Graph Representation Learning
E Jiawei, Yinglong Zhang, Shangying Yang - Springer Science and Business Media LLC, 2024
Graph neural networks (GNNs) have emerged as a powerful tool in graph representation learning. However, they are increasingly challenged by over-smoothing as network depth grows, compromising their ability to capture and represent complex graph structures. Additionally, some popular GNN variants only consider local neighbor information during node updating, ignoring the global structural information and leading to inadequate learning and differentiation of graph structures. To address these challenges, we introduce a novel graph neural network framework, GraphSAGE++. Our model extracts the representation of the target node at each layer and then concatenates the weighted representations of all layers to obtain the final result. In addition, strategies combining double aggregation with weighted concatenation are proposed, which significantly enhance the model's discernment and preservation of structural information. Empirical results on various datasets demonstrate that GraphSAGE++ excels in vertex classification, link prediction, and visualization tasks, surpassing existing met... Read More
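The layer-concatenation idea can be sketched in a few lines: keep each layer's node representation and concatenate them with learnable per-layer weights, similar in spirit to jumping-knowledge connections. The code below is a hedged sketch of that mechanism only; GraphSAGE++'s double-aggregation strategies are not reproduced, and the class name is hypothetical.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv

class WeightedConcatSAGE(torch.nn.Module):
    """Keeps every layer's representation and concatenates them with
    learnable per-layer weights (a sketch of the weighted-concatenation
    idea; not the full GraphSAGE++ architecture)."""
    def __init__(self, in_dim, hid_dim, num_layers=3):
        super().__init__()
        self.convs = torch.nn.ModuleList(
            [SAGEConv(in_dim if i == 0 else hid_dim, hid_dim)
             for i in range(num_layers)])
        self.layer_weights = torch.nn.Parameter(torch.ones(num_layers))

    def forward(self, x, edge_index):
        outs = []
        for conv in self.convs:
            x = F.relu(conv(x, edge_index))
            outs.append(x)
        w = torch.softmax(self.layer_weights, dim=0)
        # Weighted concatenation of all layer representations.
        return torch.cat([w[i] * h for i, h in enumerate(outs)], dim=-1)
```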
5. The Evolution of Distributed Systems for Graph Neural Networks and Their Origin in Graph Processing and Deep Learning: A Survey
Jana Vatter, Ruben Mayer, Hans‐Arno Jacobsen - Association for Computing Machinery (ACM), 2024
Graph neural networks (GNNs) are an emerging research field. This specialized deep neural network architecture is capable of processing graph-structured data and bridges the gap between graph processing and deep learning. As graphs are everywhere, GNNs can be applied to various domains, including recommendation systems, computer vision, natural language processing, biology, and chemistry. With the rapidly growing size of real-world graphs comes the need for efficient and scalable GNN training solutions. Consequently, many works proposing GNN systems have emerged throughout the past few years. However, there is an acute lack of overview, categorization, and comparison of such systems. We aim to fill this gap by summarizing and categorizing important methods and techniques for large-scale GNN solutions. Additionally, we establish connections between GNN systems, graph processing systems, and deep learning systems.
6. Neural Architecture Search for GNN-Based Graph Classification
Lanning Wei, Huan Zhao, Zhiqiang He - Association for Computing Machinery (ACM), 2024
Graph classification is an important problem with applications across many domains, for which graph neural networks (GNNs) have been state-of-the-art (SOTA) methods. In the literature, to adopt GNNs for the graph classification task, there are two groups of methods: global pooling and hierarchical pooling. The global pooling methods obtain the graph representation vectors by globally pooling all of the node embeddings together at the end of several GNN layers, whereas the hierarchical pooling methods provide one extra pooling operation between the GNN layers to extract hierarchical information and improve the graph representations. Both global and hierarchical pooling methods are effective in different scenarios. Due to highly diverse applications, it is challenging to design data-specific pooling methods with human expertise. To address this problem, we propose PAS (Pooling Architecture Search) to design adaptive pooling architectures by using the neural architecture search (NAS). To enable the search space design, we propose a unified pooling framework consisting of four modules: A... Read More
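For contrast with the hierarchical methods the paper searches over, the global-pooling baseline it describes looks like the following minimal sketch (assuming PyTorch Geometric; `batch` maps each node to its graph within a mini-batch of graphs).

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class GlobalPoolGNN(torch.nn.Module):
    """Global-pooling baseline: node embeddings from the GNN layers are
    mean-pooled into one graph vector at the end. Hierarchical methods
    would insert pooling operations between the conv layers instead."""
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, hid_dim)
        self.lin = torch.nn.Linear(hid_dim, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        g = global_mean_pool(x, batch)  # one vector per graph in the batch
        return self.lin(g)
```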
7. A review of graph neural networks: concepts, architectures, techniques, challenges, datasets, applications, and future directions
Bharti Khemani, Shruti Patil, Ketan Kotecha - Springer Science and Business Media LLC, 2024
Deep learning has seen significant growth recently and is now applied to a wide range of conventional use cases, including graphs. Graph data provides relational information between elements and is a standard data format for various machine learning and deep learning tasks. Models that can learn from such inputs are essential for working with graph data effectively. This paper describes how nodes and edges are identified within specific applications, such as text, entities, and relations, to create graph structures. Different applications may require different graph neural network (GNN) models. GNNs facilitate the exchange of information between nodes in a graph, enabling them to understand dependencies within the nodes and edges. The paper delves into specific GNN models like graph convolutional networks (GCNs), GraphSAGE, and graph attention networks (GATs), which are widely used in various applications today. It also discusses the message-passing mechanism employed by GNN models and examines the strengths and limitations of these models in different domains. Furthermore, the paper explores the... Read More
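As a reference point for the message-passing mechanism these models share, the layer update of a GCN (Kipf and Welling) can be written as

$$H^{(l+1)} = \sigma\!\left(\tilde{D}^{-1/2}\,\tilde{A}\,\tilde{D}^{-1/2}\,H^{(l)}\,W^{(l)}\right), \qquad \tilde{A} = A + I,$$

where $\tilde{D}$ is the degree matrix of $\tilde{A}$, $H^{(l)}$ holds the node features at layer $l$, $W^{(l)}$ is a trainable weight matrix, and $\sigma$ is a nonlinearity. GraphSAGE and GAT replace the fixed normalized-adjacency averaging with sampled-neighbor aggregation and learned attention weights, respectively.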
8. Foundations and Frontiers of Graph Learning Theory
Yu Huang, Min Zhou, Meng‐Lin Yang, 2024
Recent advancements in graph learning have revolutionized the way we understand and analyze data with complex structures. Notably, Graph Neural Networks (GNNs), i.e., neural network architectures designed for learning graph representations, have become a popular paradigm. Since these models are usually characterized by intuition-driven design or highly intricate components, placing them within a theoretical analysis framework to distill the core concepts helps us better understand the key principles that drive their functionality and guides further development. Given this surge in interest, this article provides a comprehensive summary of the theoretical foundations and breakthroughs concerning the approximation and learning behaviors intrinsic to prevalent graph learning models. Encompassing discussions on fundamental aspects such as expressive power, generalization, and optimization, as well as unique phenomena such as over-smoothing and over-squashing, this piece delves into the theoretical foundations and frontiers driving the evolution of graph learning. In addition, this article also pres... Read More
9. Graphs Unveiled: Graph Neural Networks and Graph Generation
László Kovács, Ali Jlidi, 2024
The field of graph neural networks (GNNs) is one of the hot topics in machine learning. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies extending deep learning approaches to graph data have emerged. This paper presents a survey, providing a comprehensive overview of GNNs. We discuss the applications of graph neural networks across various domains. Finally, we present an advanced field in GNNs: graph generation.
10. GTAGCN: Generalized Topology Adaptive Graph Convolutional Networks
Sukhdeep Singh, Anuj Sharma, Vinod Kumar Chauhan, 2024
Graph Neural Networks (GNNs) have emerged as a popular and standard approach for learning from graph-structured data. The literature on GNNs highlights the potential of this evolving research area and its widespread adoption in real-life applications. However, most approaches are either new in concept or derived from specific techniques, and the potential of combining more than one approach in hybrid form, which could serve sequenced and static data together, has not been studied extensively. We derive a hybrid approach based on two established techniques, generalized aggregation networks and topology adaptive graph convolutional networks, that applies effectively to both sequenced and static data. The proposed method applies to both node and graph classification. Our empirical analysis reveals that the results are on par with literature results and better for handwritten strokes as sequenced data, where graph structures have not previously been explored.
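For context, the topology adaptive graph convolution that GTAGCN builds on replaces a single aggregation step with a learnable polynomial of the normalized adjacency matrix:

$$H' = \sum_{k=0}^{K} \left(D^{-1/2} A D^{-1/2}\right)^{k} X\, W_k,$$

so each output mixes information from neighborhoods up to $K$ hops away, with a separate weight matrix $W_k$ per hop.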
11. The $\mu\mathcal{G}$ Language for Programming Graph Neural Networks
Matteo Belenchia, Flavio Corradini, Michela Quadrini, 2024
Graph neural networks form a class of deep learning architectures specifically designed to work with graph-structured data. As such, they share the inherent limitations and problems of deep learning, especially regarding the issues of explainability and trustworthiness. We propose $\mu\mathcal{G}$, an original domain-specific language for the specification of graph neural networks that aims to overcome these issues. The language's syntax is introduced, and its meaning is rigorously defined by a denotational semantics. An equivalent characterization in the form of an operational semantics is also provided and, together with a type system, is used to prove the type soundness of $\mu\mathcal{G}$. We show how $\mu\mathcal{G}$ programs can be represented in a more user-friendly graphical visualization, and provide examples of its generality by showing how it can be used to define some of the most popular graph neural network models, or to develop any custom graph processing application.
12. Graph Neural Network, ChebNet, Graph Convolutional Network, and Graph Autoencoder: Tutorial and Survey
Benyamin Ghojogh, Ali Ghodsi - Center for Open Science, 2024
This is a tutorial paper on graph neural networks, including ChebNet, the graph convolutional network, the graph attention network, and the graph autoencoder. It starts with the graph Laplacian, the graph Fourier transform, and graph convolution. Then, it explains how Chebyshev polynomials are used in graph networks to obtain ChebNet. Afterwards, the graph convolutional network and its general framework are introduced. Then, the graph attention network is explained as a combination of the attention mechanism and graph neural networks. Finally, the graph reconstruction autoencoder and the graph variational autoencoder are introduced.
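The ChebNet step the tutorial covers approximates a spectral graph filter with a truncated Chebyshev expansion, avoiding an explicit eigendecomposition of the Laplacian:

$$g_\theta \star x \;\approx\; \sum_{k=0}^{K} \theta_k\, T_k(\tilde{L})\, x, \qquad \tilde{L} = \frac{2}{\lambda_{\max}} L - I,$$

where $L$ is the normalized graph Laplacian, $\lambda_{\max}$ is its largest eigenvalue, and the Chebyshev polynomials satisfy $T_0(x) = 1$, $T_1(x) = x$, and $T_k(x) = 2x\,T_{k-1}(x) - T_{k-2}(x)$.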
13. Depth-adaptive graph neural architecture search for graph classification
Zhenpeng Wu, Jiamin Chen, Raeed Al-Sabri - Elsevier BV, 2024
In recent years, graph neural networks (GNNs) based on neighborhood aggregation schemes have become a promising method for various graph-based applications. To address the expert dependence and time cost of human-designed GNN architectures, graph neural architecture search (GNAS) has become popular. However, because mainstream GNAS methods automatically design GNN architectures with a fixed GNN depth, they cannot mine the true potential of GNN architectures for graph classification. Although a few GNAS methods have explored the importance of adaptive GNN depth based on fixed GNN architectures, they have not designed a general search space for graph classification, which limits the discovery of excellent GNN architectures. In this paper, we propose Depth-Adaptive Graph Neural Architecture Search for Graph Classification (DAGC), which systematically constructs and explores the search space for graph classification, rather than studying individual designs. By decoupling the graph classification process, DAGC proposes a complete and flexible search space, including GNN depth, aggr... Read More
14. On the Expressive Power of Graph Neural Networks
Ashwin Nalwade, Kelly Marshall, Axel Eladi, 2024
The study of Graph Neural Networks has received considerable interest in the past few years. By extending deep learning to graph-structured data, GNNs can solve a diverse set of tasks in fields including social science, chemistry, and medicine. The development of GNN architectures has largely focused on improving empirical performance on tasks like node or graph classification. However, a line of recent work has instead sought to find GNN architectures that have desirable theoretical properties: by studying their expressive power and designing architectures that maximize this expressiveness. While there is no consensus on the best way to define the expressiveness of a GNN, it can be viewed from several well-motivated perspectives. Perhaps the most natural approach is to study the universal approximation properties of GNNs, much in the way that this has been studied extensively for MLPs. Another direction focuses on the extent to which GNNs can distinguish between different graph structures, relating this to the graph isomorphism test. Besides, a GNN's ability to compute graph p... Read More
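A standard reference point for the isomorphism-test perspective is the Graph Isomorphism Network (GIN) update, whose injective sum aggregation makes it as powerful as the 1-dimensional Weisfeiler-Leman test:

$$h_v^{(k)} = \mathrm{MLP}^{(k)}\!\left(\bigl(1+\epsilon^{(k)}\bigr)\, h_v^{(k-1)} + \sum_{u \in \mathcal{N}(v)} h_u^{(k-1)}\right),$$

where $\mathcal{N}(v)$ is the neighborhood of node $v$ and $\epsilon^{(k)}$ is a learnable (or fixed) scalar.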
15. Graph Condensation: A Survey
Xinyi Gao, Junliang Yu, Wei Jiang, 2024
The burgeoning volume of graph data poses significant challenges in storage, transmission, and particularly the training of graph neural networks (GNNs). To address these challenges, graph condensation (GC) has emerged as an innovative solution. GC focuses on synthesizing a compact yet highly representative graph on which GNNs can achieve performance comparable to that obtained by training on the large original graph. The notable efficacy of GC and its broad prospects have garnered significant attention and spurred extensive research. This survey paper provides an up-to-date and systematic overview of GC, organizing existing research into four categories aligned with critical GC evaluation criteria: effectiveness, generalization, fairness, and efficiency. To facilitate an in-depth and comprehensive understanding of GC, we examine various methods under each category and thoroughly discuss two essential components within GC: optimization strategies and condensed graph generation. Additionally, we introduce the applications of GC in a variety of fields, and highlight the present challenges and novel ins... Read More
16. Layer-Wise Training for Self-Supervised Learning on Graphs
Oscar Pina, Verónica Vilaplana - Elsevier BV, 2024
End-to-end training of graph neural networks (GNNs) on large graphs presents several memory and computational challenges and restricts application to shallow architectures, as depth exponentially increases the memory and space complexities. In this manuscript, we propose Layer-wise Regularized Graph Infomax, an algorithm to train GNNs layer by layer in a self-supervised manner. We decouple the feature propagation and feature transformation carried out by GNNs to learn node representations, in order to derive a loss function based on the prediction of future inputs. We evaluate the algorithm on large inductive graphs and show performance similar to other end-to-end methods with substantially increased efficiency, which enables the training of more sophisticated models on a single device. We also show that our algorithm avoids over-smoothing of the representations, another common challenge of deep GNNs.
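The general layer-by-layer scheme can be sketched as follows: each layer is trained against its own local objective while earlier layers stay frozen, so memory holds only one layer's computation graph at a time. This is a hedged, generic sketch; `local_loss` is a placeholder standing in for the paper's regularized-infomax objective, which is not reproduced here.

```python
import torch

def train_layerwise(layers, local_loss, x, edge_index, epochs=100, lr=0.01):
    """Greedy layer-wise training: optimize one layer at a time with a
    local self-supervised loss, then freeze it and feed its detached
    output to the next layer, bounding memory to a single layer."""
    h = x
    for layer in layers:
        opt = torch.optim.Adam(layer.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            out = layer(h, edge_index)
            loss = local_loss(out, h)  # placeholder objective
            loss.backward()
            opt.step()
        with torch.no_grad():          # detach: earlier layers stay frozen
            h = layer(h, edge_index)
    return h
```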
17. A Survey on Graph Neural Network Acceleration: A Hardware Perspective
Chen Shi, Jingyu Liu, Li Shen - Institute of Electrical and Electronics Engineers (IEEE), 2024
Graph neural networks (GNNs) have emerged as powerful approaches for learning about graphs and vertices. The rapid adoption of GNNs poses requirements on processing efficiency. Because general-purpose platforms are ill-suited to the workload, dedicated hardware devices and platforms have been developed to efficiently accelerate the training and inference of GNNs. We conduct a survey on hardware acceleration for GNNs. We first introduce recent advances in the domain, and then provide a methodology of categorization to classify existing works into three categories. Next, we discuss optimization techniques adopted at different levels. Finally, we propose suggestions on future directions to facilitate further work.
18. Towards a Theory of Machine Learning on Graphs and its Applications in Combinatorial Optimization
Christopher Morris - International Joint Conferences on Artificial Intelligence Organization, 2024
Machine learning on graphs, especially using graph neural networks (GNNs), has seen a surge in interest due to the wide availability of graph data across many disciplines, from life and physical to social and engineering sciences. Despite their practical success, our theoretical understanding of the properties of GNNs remains incomplete. Here, we survey the author's and his collaborators' progress in developing a deeper theoretical understanding of GNNs' expressive power and generalization abilities. In addition, we overview recent progress in using GNNs to speed up solvers for hard combinatorial optimization tasks.
19. GC-Bench: A Benchmark Framework for Graph Condensation with New Insights
Shengbo Gong, Juntong Ni, Noveen Sachdeva, 2024
Graph condensation (GC) is an emerging technique designed to learn a significantly smaller graph that retains the essential information of the original graph. This condensed graph has shown promise in accelerating graph neural networks while preserving performance comparable to that achieved with the original, larger graphs. Additionally, this technique facilitates downstream applications such as neural architecture search and enhances our understanding of redundancy in large graphs. Despite the rapid development of GC methods, a systematic evaluation framework remains absent, which is necessary to clarify the critical designs for particular evaluative aspects. Furthermore, several meaningful questions have not been investigated, such as whether GC inherently preserves certain graph properties and offers robustness even without targeted design efforts. In this paper, we introduce GC-Bench, a comprehensive framework to evaluate recent GC methods across multiple dimensions and to generate new insights. Our experimental findings provide deeper insights into the GC process and the cha... Read More
20. Scalable Graph Compressed Convolutions
Junshu Sun, Chenxue Yang, Shuhui Wang, 2024
Designing effective graph neural networks (GNNs) with message passing involves two fundamental challenges, i.e., determining optimal message-passing pathways and designing local aggregators. Previous methods for designing optimal pathways are limited by information loss on the input features. On the other hand, existing local aggregators generally fail to extract multi-scale features and approximate diverse operators under limited parameter scales. In contrast to these methods, Euclidean convolution has been proven to be an expressive aggregator, making it a perfect candidate for GNN construction. However, the challenge in generalizing Euclidean convolution to graphs arises from the irregular structure of graphs. To bridge the gap between Euclidean space and graph topology, we propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution. The permutations constrain all nodes to a row regardless of their input order, thereby enabling the flexible generalization of Euclidean convolution to graphs. Based on the graph calibration, we propose th... Read More
Get Full Report
Access our comprehensive collection of 162 documents related to this technology