Graph Neural Networks (GNNs) process irregular data structures where relationships between entities are as important as the entities themselves. Current implementations face computational barriers when scaling to graphs with millions of nodes, with memory requirements growing quadratically and message-passing operations becoming prohibitively expensive beyond certain thresholds. Real-world applications like molecular interaction networks or social graphs often exceed these practical limits.

The fundamental challenge lies in balancing the expressiveness of node representations with the computational efficiency needed for large-scale graph processing.

This page brings together solutions from recent research—including attention-based architectures, sampling strategies for large graphs, hierarchical approaches to graph representation, and memory-efficient message passing schemes. These and other approaches focus on making GNNs practical for industrial-scale graph applications while preserving their ability to capture complex structural patterns.
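
Before the individual papers, a minimal sketch of the message-passing pattern that nearly all of the architectures below build on may be useful: each node averages its neighbors' features and combines them with its own through learned weights. The names here (message_passing_layer, W_self, W_neigh) are illustrative, not drawn from any specific paper.

```python
# A minimal mean-aggregation message-passing layer in NumPy (illustrative only).
import numpy as np

def message_passing_layer(x, edge_index, W_self, W_neigh):
    """x: (N, d) node features; edge_index: (2, E) array of (src, dst) pairs."""
    src, dst = edge_index
    agg = np.zeros_like(x)
    np.add.at(agg, dst, x[src])                       # sum incoming messages per node
    deg = np.bincount(dst, minlength=x.shape[0])      # in-degree of each node
    agg = agg / np.maximum(deg, 1)[:, None]           # mean aggregation
    return np.maximum(x @ W_self + agg @ W_neigh, 0)  # linear update + ReLU

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                           # 5 nodes, 8 features
edge_index = np.array([[0, 1, 2, 3], [1, 2, 3, 4]])   # 4 directed edges
h = message_passing_layer(x, edge_index,
                          rng.normal(size=(8, 16)), rng.normal(size=(8, 16)))
```

The scaling problems discussed below all stem from stacking such layers: every additional layer pulls in another hop of neighbors, and every edge carries a message.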

1. Reimagining Graph Classification from a Prototype View with Optimal Transport: Algorithm and Theorem

Chen Qian, Huayi Tang, Hong Liang - ACM, 2024

Recently, Graph Neural Networks (GNNs) have achieved impressive performance in graph classification tasks. However, the message passing mechanism in GNNs only implicitly utilizes the topological information of the graph, which may lead to a loss of structural information. Furthermore, the graph classification decision process based on GNNs resembles a black box and lacks sufficient transparency. The non-linear classifier following the GNNs also defaults to the assumption that each class is represented by a single vector, thereby limiting the diversity of intra-class representations.

2. Graph Convolutional Neural Networks In The Companion Model

J. Y. Shi, Shreyas Chaudhari, José M. F. Moura - IEEE, 2024

Graph Convolutional Neural Networks (graph CNNs) adapt the traditional CNN architecture for use on graphs, replacing convolution layers with graph convolution layers. Although similar in architecture, graph CNNs are used for geometric deep learning, whereas conventional CNNs are used for deep learning on grid-based data, such as audio or images, with seemingly no direct relationship between the two classes of neural networks. This paper shows that under certain conditions traditional CNNs can be used with graph data as a good approximation to graph CNNs, avoiding the need for graph CNNs. We show this by using an alternative graph signal representation, the graph companion model, that we recently proposed in [1]. Instead of using the given graph and signal in the nodal domain, the graph companion model uses the equivalent companion graph and signal representation in the companion domain. In this way, the graph CNN architecture in the nodal domain is equivalent to our deep learning architecture: a traditional CNN in the companion domain with appropriate boundary conditions (b.c.).

3. AutoFGNN: A Framework for Extracting All Frequency Information from Large-Scale Graphs

Qi Zhang, Yanfeng Sun, Jipeng Guo - IEEE, 2024

As a powerful model for deep learning on graph-structured data, Graph Neural Networks (GNNs) face a scalability limitation that is receiving increasing attention. To tackle this limitation, two categories of scalable GNNs have been proposed: sampling-based and model simplification methods. However, sampling-based methods suffer from high communication costs and poor performance due to the sampling process. Conversely, existing model simplification methods rely only on parameter-free feature propagation, disregarding its spectral properties. Consequently, these methods can capture only low-frequency information, discarding valuable middle- and high-frequency information. This paper proposes Automatic Filtering Graph Neural Networks (AutoFGNN), a framework that can extract all frequency information from large-scale graphs. AutoFGNN employs parameter-free low-, middle-, and high-pass filters, which extract the corresponding information for all nodes without introducing parameters. To merge the extracted features, a trainable transformer-based information fusion module is utilized.
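
To make the filtering idea concrete, the following sketch builds parameter-free low-, middle-, and high-pass propagation from the symmetrically normalized adjacency. These filter forms are common textbook choices assumed for illustration; AutoFGNN's actual filters may differ.

```python
# Parameter-free spectral filters from the normalized adjacency (illustrative forms).
import numpy as np

def normalized_adj(A):
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(A.sum(axis=1), 1e-12))
    return d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(1).normal(size=(4, 3))
A_hat, I = normalized_adj(A), np.eye(4)

low  = A_hat @ X                 # low-pass: smooths features over neighbors
high = (I - A_hat) @ X           # high-pass: normalized Laplacian, emphasizes differences
mid  = A_hat @ (I - A_hat) @ X   # band-pass-like composition of the two
```

Because the filters carry no parameters, they can be applied once as preprocessing; only the fusion module then needs training.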

4. GraphSAGE++: Weighted Multi-scale GNN for Graph Representation Learning

E Jiawei, Yinglong Zhang, Shangying Yang - Springer Science and Business Media LLC, 2024

Graph neural networks (GNNs) have emerged as a powerful tool in graph representation learning. However, they are increasingly challenged by over-smoothing as network depth grows, compromising their ability to capture and represent complex graph structures. Additionally, some popular GNN variants consider only local neighbor information during node updates, ignoring global structural information and leading to inadequate learning and differentiation of graph structures. To address these challenges, we introduce a novel graph neural network framework, GraphSAGE++. Our model extracts the representation of the target node at each layer and then concatenates all layer-weighted representations to obtain the final result. In addition, strategies combining double aggregations with weighted concatenation are proposed, which significantly enhance the model's discernment and preservation of structural information. Empirical results on various datasets demonstrate that GraphSAGE++ excels in vertex classification, link prediction, and visualization tasks, surpassing existing methods.
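
A minimal sketch of the layer-wise weighted concatenation the abstract describes follows, assuming a precomputed normalized adjacency A_hat and per-layer weights alphas; the double-aggregation strategies of GraphSAGE++ itself are omitted.

```python
# Sketch: keep every layer's representation and concatenate weighted copies.
import numpy as np

def layerwise_concat(x, A_hat, num_layers, alphas):
    """alphas: one weight per kept representation (num_layers + 1, including the input)."""
    reps, h = [x], x
    for _ in range(num_layers):
        h = np.maximum(A_hat @ h, 0)   # one propagation step with ReLU
        reps.append(h)
    # weight each layer's output, then concatenate along the feature axis
    return np.concatenate([a * r for a, r in zip(alphas, reps)], axis=1)
```

Keeping shallow layers in the final representation is what counters over-smoothing: even if deep layers converge toward similar features, the early, more local views survive in the concatenation.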

5. The Evolution of Distributed Systems for Graph Neural Networks and Their Origin in Graph Processing and Deep Learning: A Survey

Jana Vatter, Ruben Mayer, Hans‐Arno Jacobsen - Association for Computing Machinery (ACM), 2024

Graph neural networks (GNNs) are an emerging research field. This specialized deep neural network architecture is capable of processing graph-structured data and bridges the gap between graph processing and deep learning. As graphs are everywhere, GNNs can be applied to various domains, including recommendation systems, computer vision, natural language processing, biology, and chemistry. With the rapidly growing size of real-world graphs, the need for efficient and scalable GNN training solutions has become pressing. Consequently, many works proposing GNN systems have emerged over the past few years. However, there is an acute lack of overview, categorization, and comparison of such systems. We aim to fill this gap by summarizing and categorizing important methods and techniques for large-scale GNN solutions. Additionally, we establish connections between GNN systems, graph processing systems, and deep learning systems.

6. Neural Architecture Search for GNN-Based Graph Classification

Lanning Wei, Huan Zhao, Zhiqiang He - Association for Computing Machinery (ACM), 2024

Graph classification is an important problem with applications across many domains, for which graph neural networks (GNNs) have been state-of-the-art (SOTA) methods. In the literature, to adopt GNNs for the graph classification task, there are two groups of methods: global pooling and hierarchical pooling. The global pooling methods obtain the graph representation vectors by globally pooling all of the node embeddings together at the end of several GNN layers, whereas the hierarchical pooling methods provide one extra pooling operation between the GNN layers to extract hierarchical information and improve the graph representations. Both global and hierarchical pooling methods are effective in different scenarios. Due to highly diverse applications, it is challenging to design data-specific pooling methods with human expertise. To address this problem, we propose PAS (Pooling Architecture Search) to design adaptive pooling architectures by using neural architecture search (NAS). To enable the search space design, we propose a unified pooling framework consisting of four modules.
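
The two pooling families the abstract contrasts can be shown in a few lines: global pooling collapses all node embeddings at once, while hierarchical (here, top-k) pooling repeatedly coarsens the graph. The scoring and gating below are generic choices assumed for illustration, not the PAS search space itself.

```python
# Global pooling vs one step of hierarchical top-k pooling (illustrative).
import numpy as np

def global_mean_pool(h):
    return h.mean(axis=0)                       # one vector summarizing the whole graph

def topk_pool(h, A, k, p):
    scores = h @ p / np.linalg.norm(p)          # project nodes onto a learnable vector p
    idx = np.argsort(scores)[-k:]               # keep the k highest-scoring nodes
    gate = np.tanh(scores[idx])[:, None]        # gate features so p receives gradient
    return h[idx] * gate, A[np.ix_(idx, idx)]   # coarsened features and adjacency
```

A hierarchical model interleaves steps like topk_pool with GNN layers and still applies a global pool at the very end.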

7. A review of graph neural networks: concepts, architectures, techniques, challenges, datasets, applications, and future directions

Bharti Khemani, Shruti Patil, Ketan Kotecha - Springer Science and Business Media LLC, 2024

Deep learning has seen significant growth recently and is now applied to a wide range of conventional use cases, including graphs. Graph data provides relational information between elements and is a standard data format for various machine learning and deep learning tasks. Models that can learn from such inputs are essential for working with graph data effectively. This paper identifies nodes and edges within specific applications, such as text, entities, and relations, to create graph structures. Different applications may require various graph neural network (GNN) models. GNNs facilitate the exchange of information between nodes in a graph, enabling them to understand dependencies within the nodes and edges. The paper delves into specific GNN models like graph convolution networks (GCNs), GraphSAGE, and graph attention networks (GATs), which are widely used in various applications today. It also discusses the message-passing mechanism employed by GNN models and examines the strengths and limitations of these models in different domains.

8. Foundations and Frontiers of Graph Learning Theory

Yu Huang, Min Zhou, Meng‐Lin Yang, 2024

Recent advancements in graph learning have revolutionized the way we understand and analyze data with complex structures. Notably, Graph Neural Networks (GNNs), i.e., neural network architectures designed for learning graph representations, have become a popular paradigm. Since these models are usually characterized by intuition-driven design or highly intricate components, placing them within a theoretical analysis framework to distill the core concepts helps clarify the key principles that drive their functionality and guides further development. Given this surge in interest, this article provides a comprehensive summary of the theoretical foundations and breakthroughs concerning the approximation and learning behaviors intrinsic to prevalent graph learning models. Encompassing discussions on fundamental aspects such as expressive power, generalization, and optimization, together with unique phenomena such as over-smoothing and over-squashing, this piece delves into the theoretical foundations and frontiers driving the evolution of graph learning.

9. Graphs Unveiled: Graph Neural Networks and Graph Generation

László Kovács, Ali Jlidi, 2024

The field of GNNs is one of the hot topics in machine learning. The complexity of graph data has imposed significant challenges on existing machine learning algorithms, and many studies extending deep learning approaches to graph data have recently emerged. This paper presents a survey providing a comprehensive overview of Graph Neural Networks (GNNs). We discuss the applications of graph neural networks across various domains. Finally, we present an advanced field in GNNs: graph generation.

10. GTAGCN: Generalized Topology Adaptive Graph Convolutional Networks

Sukhdeep Singh, Anuj Sharma, Vinod Kumar Chauhan, 2024

Graph Neural Networks (GNNs) have emerged as a popular and standard approach for learning from graph-structured data. The literature on GNNs highlights the potential of this evolving research area and its widespread adoption in real-life applications. However, most approaches are either new in concept or derived from specific techniques, and the potential of combining more than one approach in hybrid form, which could serve sequenced and static data together, has not been studied extensively. We derive a hybrid approach based on two established techniques, generalized aggregation networks and topology adaptive graph convolution networks, that applies effectively to both sequenced and static data. The proposed method applies to both node and graph classification. Our empirical analysis reveals that the results are on par with literature results and better for handwritten strokes as sequenced data, where graph structures have not been explored.

11. The $\mu\mathcal{G}$ Language for Programming Graph Neural Networks

Matteo Belenchia, Flavio Corradini, Michela Quadrini, 2024

Graph neural networks form a class of deep learning architectures specifically designed to work with graph-structured data. As such, they share the inherent limitations and problems of deep learning, especially regarding the issues of explainability and trustworthiness. We propose $\mu\mathcal{G}$, an original domain-specific language for the specification of graph neural networks that aims to overcome these issues. The language's syntax is introduced, and its meaning is rigorously defined by a denotational semantics. An equivalent characterization in the form of an operational semantics is also provided and, together with a type system, is used to prove the type soundness of $\mu\mathcal{G}$. We show how $\mu\mathcal{G}$ programs can be represented in a more user-friendly graphical visualization, and provide examples of its generality by showing how it can be used to define some of the most popular graph neural network models, or to develop any custom graph processing application.

12. Graph Neural Network, ChebNet, Graph Convolutional Network, and Graph Autoencoder: Tutorial and Survey

Benyamin Ghojogh, Ali Ghodsi - Center for Open Science, 2024

This is a tutorial paper on graph neural networks, including ChebNet, the graph convolutional network, the graph attention network, and the graph autoencoder. It starts with the graph Laplacian, the graph Fourier transform, and graph convolution. Then, it explains how Chebyshev polynomials are used in graph networks to obtain ChebNet. Afterwards, the graph convolutional network and its general framework are introduced. Then, the graph attention network is explained as a combination of the attention mechanism and graph neural networks. Finally, the graph reconstruction autoencoder and graph variational autoencoder are introduced.
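
The ChebNet construction mentioned above reduces to a three-term recurrence: with the Laplacian rescaled so its spectrum lies in [-1, 1], the filtered signal is a K-term Chebyshev polynomial in the Laplacian applied to the input. A sketch (thetas holds the K learnable coefficients, K >= 2):

```python
# Chebyshev filtering: out = sum_k thetas[k] * T_k(L_hat) @ X,
# using the recurrence T_k = 2 * L_hat @ T_{k-1} - T_{k-2}.
import numpy as np

def cheb_filter(X, L, thetas, lam_max=2.0):
    n = L.shape[0]
    L_hat = (2.0 / lam_max) * L - np.eye(n)  # rescale eigenvalues into [-1, 1]
    T_prev, T_curr = X, L_hat @ X            # T_0(L_hat) @ X and T_1(L_hat) @ X
    out = thetas[0] * T_prev + thetas[1] * T_curr
    for k in range(2, len(thetas)):
        T_prev, T_curr = T_curr, 2 * L_hat @ T_curr - T_prev
        out += thetas[k] * T_curr
    return out
```

The recurrence keeps the filter K-hop localized while avoiding any eigendecomposition, which is what made ChebNet practical.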

13. Depth-adaptive graph neural architecture search for graph classification

Zhenpeng Wu, Jiamin Chen, Raeed Al-Sabri - Elsevier BV, 2024

In recent years, graph neural networks (GNNs) based on neighborhood aggregation schemes have become a promising method in various graph-based applications. To address the expert dependence and time cost of human-designed GNN architectures, graph neural architecture search (GNAS) has become popular. However, because mainstream GNAS methods automatically design GNN architectures with a fixed GNN depth, they cannot mine the true potential of GNN architectures for graph classification. Although a few GNAS methods have explored the importance of adaptive GNN depth based on fixed GNN architectures, they have not designed a general search space for graph classification, which limits the discovery of excellent GNN architectures. In this paper, we propose Depth-Adaptive Graph Neural Architecture Search for Graph Classification (DAGC), which systematically constructs and explores the search space for graph classification, rather than studying individual designs. By decoupling the graph classification process, DAGC proposes a complete and flexible search space, including GNN depth and aggregation operations.

14. On the Expressive Power of Graph Neural Networks

Ashwin Nalwade, Kelly Marshall, Axel Eladi, 2024

The study of Graph Neural Networks has received considerable interest in the past few years. By extending deep learning to graph-structured data, GNNs can solve a diverse set of tasks in fields including social science, chemistry, and medicine. The development of GNN architectures has largely been focused on improving empirical performance on tasks like node or graph classification. However, a line of recent work has instead sought GNN architectures with desirable theoretical properties, studying their expressive power and designing architectures that maximize this expressiveness. While there is no consensus on the best way to define the expressiveness of a GNN, it can be viewed from several well-motivated perspectives. Perhaps the most natural approach is to study the universal approximation properties of GNNs, much in the way this has been studied extensively for MLPs. Another direction focuses on the extent to which GNNs can distinguish between different graph structures, relating this to the graph isomorphism test.
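
Since much of this line of work measures expressiveness against the graph isomorphism test, a compact implementation of 1-WL color refinement, the procedure that upper-bounds standard message-passing GNNs, may help fix ideas; the hashing scheme is an implementation convenience.

```python
# 1-WL color refinement; two graphs with different final color multisets
# are non-isomorphic (the converse does not hold).
def wl_refine(colors, adj, rounds=10):
    """colors: dict node -> hashable label; adj: dict node -> list of neighbors."""
    for _ in range(rounds):
        new = {v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
               for v in adj}
        if len(set(new.values())) == len(set(colors.values())):
            return new  # partition stable: refinement can no longer split classes
        colors = new
    return colors

# Two triangles vs a 6-cycle: both 2-regular, so 1-WL cannot tell them apart.
adj_a = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
adj_b = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
col_a = wl_refine({v: 0 for v in adj_a}, adj_a)
col_b = wl_refine({v: 0 for v in adj_b}, adj_b)
print(sorted(col_a.values()) == sorted(col_b.values()))  # True: 1-WL fails here
```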

15. Graph Condensation: A Survey

Xinyi Gao, Junliang Yu, Wei Jiang, 2024

The burgeoning volume of graph data poses significant challenges in storage, transmission, and particularly the training of graph neural networks (GNNs). To address these challenges, graph condensation (GC) has emerged as an innovative solution. GC focuses on synthesizing a compact yet highly representative graph on which GNNs can achieve performance comparable to that obtained by training on the large original graph. The notable efficacy of GC and its broad prospects have garnered significant attention and spurred extensive research. This survey paper provides an up-to-date and systematic overview of GC, organizing existing research into four categories aligned with critical GC evaluation criteria: effectiveness, generalization, fairness, and efficiency. To facilitate an in-depth and comprehensive understanding of GC, we examine various methods under each category and thoroughly discuss two essential components within GC: optimization strategies and condensed graph generation. Additionally, we introduce the applications of GC in a variety of fields and highlight the present challenges.

16. Layer-Wise Training for Self-Supervised Learning on Graphs

Oscar Pina, Verónica Vilaplana - Elsevier BV, 2024

End-to-end training of graph neural networks (GNNs) on large graphs presents several memory and computational challenges, and limits the application to shallow architectures, as depth exponentially increases memory and space complexity. In this manuscript, we propose Layer-wise Regularized Graph Infomax, an algorithm to train GNNs layer by layer in a self-supervised manner. We decouple the feature propagation and feature transformation carried out by GNNs to learn node representations, in order to derive a loss function based on the prediction of future inputs. We evaluate the algorithm on large inductive graphs and show performance similar to other end-to-end methods with substantially increased efficiency, which enables the training of more sophisticated models on a single device. We also show that our algorithm avoids oversmoothing of the representations, another common challenge of deep GNNs.
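
The decoupling the abstract mentions can be sketched as follows: propagation runs once as preprocessing, so each transformation can then be trained independently without storing a deep computation graph. This is a generic illustration of the decoupling idea under assumed names; the paper's self-supervised loss is omitted.

```python
# Decoupled propagation: run message passing once, with no gradients involved.
import numpy as np

def precompute_propagations(X, A_hat, hops):
    """Returns [X, A_hat @ X, A_hat^2 @ X, ...]; each can feed a cheap, separate model."""
    feats, h = [X], X
    for _ in range(hops):
        h = A_hat @ h
        feats.append(h)
    return feats
```

Because the expensive sparse products happen once up front, memory during training no longer grows with depth, which is what allows larger graphs to fit on a single device.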

17. A Survey on Graph Neural Network Acceleration: A Hardware Perspective

Chen Shi, Jingyu Liu, Li Shen - Institute of Electrical and Electronics Engineers (IEEE), 2024

Graph neural networks (GNNs) have emerged as powerful approaches to learn knowledge about graphs and vertices. The rapid adoption of GNNs poses requirements for processing efficiency. Because general-purpose platforms are a poor fit, dedicated hardware devices and platforms have been developed to efficiently accelerate the training and inference of GNNs. We conduct a survey on hardware acceleration for GNNs. We first introduce recent advances in the domain, then provide a methodology for classifying existing works into three categories. Next, we discuss optimization techniques adopted at different levels. Finally, we propose suggestions on future directions to facilitate further work.

18. Towards a Theory of Machine Learning on Graphs and its Applications in Combinatorial Optimization

Christopher Morris - International Joint Conferences on Artificial Intelligence Organization, 2024

Machine learning on graphs, especially using graph neural networks (GNNs), has seen a surge in interest due to the wide availability of graph data across many disciplines, from life and physical to social and engineering sciences. Despite their practical success, our theoretical understanding of the properties of GNNs remains incomplete. Here, we survey the author's and his collaborators' progress in developing a deeper theoretical understanding of GNNs' expressive power and generalization abilities. In addition, we overview recent progress in using GNNs to speed up solvers for hard combinatorial optimization tasks.

19. GC-Bench: A Benchmark Framework for Graph Condensation with New Insights

Shengbo Gong, Juntong Ni, Noveen Sachdeva, 2024

Graph condensation (GC) is an emerging technique designed to learn a significantly smaller graph that retains the essential information of the original graph. This condensed graph has shown promise in accelerating graph neural networks while preserving performance comparable to that achieved with the original, larger graphs. Additionally, this technique facilitates downstream applications such as neural architecture search and enhances our understanding of redundancy in large graphs. Despite the rapid development of GC methods, a systematic evaluation framework remains absent, which is necessary to clarify the critical designs for particular evaluative aspects. Furthermore, several meaningful questions have not been investigated, such as whether GC inherently preserves certain graph properties and offers robustness even without targeted design efforts. In this paper, we introduce GC-Bench, a comprehensive framework to evaluate recent GC methods across multiple dimensions and to generate new insights. Our experimental findings provide deeper insights into the GC process.

20. Scalable Graph Compressed Convolutions

Junshu Sun, Chenxue Yang, Shuhui Wang, 2024

Designing effective graph neural networks (GNNs) with message passing has two fundamental challenges, i.e., determining optimal message-passing pathways and designing local aggregators. Previous methods of designing optimal pathways are limited by information loss on the input features. On the other hand, existing local aggregators generally fail to extract multi-scale features and approximate diverse operators under limited parameter scales. In contrast to these methods, Euclidean convolution has been proven to be an expressive aggregator, making it a perfect candidate for GNN construction. However, the challenges of generalizing Euclidean convolution to graphs arise from the irregular structure of graphs. To bridge the gap between Euclidean space and graph topology, we propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution. The permutations constrain all nodes in a row regardless of their input order and therefore enable the flexible generalization of Euclidean convolution to graphs.

21. Sparse Implementation of Versatile Graph-Informed Layers

Francesco Della Santa, 2024

Graph Neural Networks (GNNs) have emerged as effective tools for learning tasks on graph-structured data. Recently, Graph-Informed (GI) layers were introduced to address regression tasks on graph nodes, extending their applicability beyond classic GNNs. However, existing implementations of GI layers lack efficiency due to dense memory allocation. This paper presents a sparse implementation of GI layers, leveraging the sparsity of adjacency matrices to reduce memory usage significantly. Additionally, a versatile general form of GI layers is introduced, enabling their application to subsets of graph nodes. The proposed sparse implementation improves the computational efficiency and scalability of GI layers, permitting deeper Graph-Informed Neural Networks (GINNs) to be built and facilitating their scalability to larger graphs.
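
The memory argument for sparse adjacency storage is easy to quantify. The sketch below, for a hypothetical graph with 100,000 nodes and roughly a million edges, compares a dense float32 matrix with CSR storage:

```python
# Dense vs CSR memory for a sparse adjacency matrix (hypothetical sizes).
import numpy as np
from scipy.sparse import random as sparse_random

n, density = 100_000, 1e-4  # ~1M nonzeros out of 10^10 entries
A = sparse_random(n, n, density=density, format="csr",
                  dtype=np.float32, random_state=0)
dense_bytes = n * n * 4     # a dense float32 matrix: ~40 GB
sparse_bytes = A.data.nbytes + A.indices.nbytes + A.indptr.nbytes
print(f"dense ~{dense_bytes / 1e9:.0f} GB vs CSR ~{sparse_bytes / 1e6:.1f} MB")
```

The same gap applies to the matrix products inside each layer, which is why sparse implementations translate directly into deeper networks and larger graphs.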

22. A Comprehensive Review of the Oversmoothing in Graph Neural Networks

Xu Zhang, Yonghui Xu, Wei He - Springer Nature Singapore, 2024

There are many ways to process graph data in deep learning, among which the Graph Neural Network (GNN) is an effective and popular deep learning model. However, GNNs also have problems. After multiple layers, the features of different nodes become increasingly similar, so that the model identifies two completely different nodes as one type: nodes with different structural information become almost identical at the feature level and are difficult to distinguish. In node classification, for example, two completely different types of nodes obtain highly similar features after model training. This phenomenon is called oversmoothing. How to alleviate and solve the oversmoothing problem has become an emerging hot topic in graph research. However, there has yet to be an extensive investigation and evaluation of this topic. This paper summarizes different approaches to mitigating the oversmoothing phenomenon in a detailed research survey.
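
Oversmoothing is easy to reproduce numerically: repeatedly applying a normalized adjacency drives node features toward one another. The sketch below measures this with mean pairwise cosine similarity on a random graph; all sizes and parameters are illustrative.

```python
# Repeated propagation pushes mean pairwise cosine similarity toward 1.
import numpy as np

def mean_pairwise_cosine(H):
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
    S = Hn @ Hn.T
    n = H.shape[0]
    return (S.sum() - n) / (n * (n - 1))  # average over off-diagonal pairs

rng = np.random.default_rng(0)
A = (rng.random((50, 50)) < 0.1).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)                  # self-loops, as in common GCN practice
d = 1.0 / np.sqrt(A.sum(axis=1))
A_hat = d[:, None] * A * d[None, :]

H = rng.normal(size=(50, 16))
for step in range(1, 17):
    H = A_hat @ H
    if step in (1, 4, 16):
        print(step, round(mean_pairwise_cosine(H), 3))  # similarity climbs toward 1
```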

23. RobGC: Towards Robust Graph Condensation

Xinyi Gao, Hongzhi Yin, Tong Chen, 2024

Graph neural networks (GNNs) have attracted widespread attention for their impressive capability in graph representation learning. However, the increasing prevalence of large-scale graphs presents a significant challenge for GNN training due to computational demands, limiting the applicability of GNNs in various scenarios. In response to this challenge, graph condensation (GC) has been proposed as a promising acceleration solution, focusing on generating an informative compact graph that enables efficient training of GNNs while retaining performance. Despite the potential to accelerate GNN training, existing GC methods overlook the quality of large training graphs during both the training and inference stages. They indiscriminately emulate the training graph distributions, making the condensed graphs susceptible to noise within the training graph and significantly impeding the application of GC in intricate real-world scenarios. To address this issue, we propose robust graph condensation (RobGC), a plug-and-play approach for GC that extends the robustness and applicability of condensed graphs.

24. Contextualized Messages Boost Graph Representations

Brian Godwin Lim, 2024

Graph neural networks (GNNs) have gained significant interest in recent years due to their ability to handle arbitrarily structured data represented as graphs. GNNs generally follow the message-passing scheme to locally update node feature representations. A graph readout function is then employed to create a representation for the entire graph. Several studies proposed different GNNs by modifying the aggregation and combination strategies of the message-passing framework, often inspired by heuristics. Nevertheless, several studies have begun exploring GNNs from a theoretical perspective based on the graph isomorphism problem, which inherently assumes countable node feature representations. Yet, there are only a few theoretical works exploring GNNs with uncountable node feature representations. This paper presents a new perspective on the representational capabilities of GNNs across all levels - node-level, neighborhood-level, and graph-level - when the space of node feature representations is uncountable. From the results, a novel soft-isomorphic relational graph convolution network is proposed.

25. Research on Graph Neural Network Algorithm and Image Recognition Application

Jiahao Xu, Jianping Li - IEEE, 2023

The emergence of graph neural networks makes up for the limitations of traditional neural networks in dealing with complex graph data, and they are widely used in the fields of social networks and recommender systems; at the same time, they have some shortcomings of their own. This paper introduces the basics of graph neural networks and summarizes the advantages and disadvantages of the GSN, GraphSAGE, DeepWalk, and Node2Vec algorithms. Finally, it discusses the application of graph neural networks in the field of image recognition, summarizes their deficiencies in practical applications, and offers prospects for the future development of graph neural networks.

26. Proceedings of the 2nd on Graph Neural Networking Workshop 2023

- ACM, 2023

It is our great pleasure to welcome you to the Second International Workshop on Graph Neural Networking - GNNet 2023, co-located with ACM CoNEXT 2023. Graphs are emerging as an abstraction to represent complex data. Computer Networks are fundamentally graphs, and many of their relevant characteristics - such as topology and routing - are represented as graph-structured data. Machine learning, especially deep representation learning on graphs, is an emerging field with a wide array of applications. Within this field, Graph Neural Networks (GNNs) have been recently proposed to model and learn over graph-structured data. Due to their unique ability to generalize over graph data, GNNs are a central tool to apply AI/ML techniques to networking applications.

27. Graph Neural Networks: Foundation, Frontiers and Applications

Lingfei Wu, Peng Cui, Jian Pei - ACM, 2023

The field of graph neural networks (GNNs) has seen rapid and incredible strides over the recent years. Graph neural networks, also known as deep learning on graphs, graph representation learning, or geometric deep learning, have become one of the fastest-growing research topics in machine learning, especially deep learning. However, as the field rapidly grows, it has been extremely challenging to gain a global perspective of the developments of GNNs. Therefore, we feel the urgency to bridge the above gap and have a comprehensive tutorial on this fast-growing yet challenging topic.

28. Node Classification on The Citation Network Using Graph Neural Network

Irani Hoeronis, Bambang Riyanto Trilaksono - STMIK AKBA, 2023

Research on Graph Neural Networks has influenced a variety of current real-world problems. The graph-based approach is considered capable of effectively representing the actual state of surrounding data by utilizing nodes, edges, and features. Comparing the feedforward neural network (FNN) and graph neural network approaches, we determine the accuracy of each method. In the baseline experiment, training and testing were performed using the FNN approach, which reached an accuracy of 72.59%, while the GNN model reached 81.65%, a 9.06 percentage-point improvement over the baseline. Model predictions on new, randomly generated examples showcase the probabilities of each class.

29. Rethinking Random Walk in Graph Representation Learning

Dingyi Zeng, Wenyu Chen, Wanlong Liu - IEEE, 2023

With the help of deep learning, Graph Neural Networks (GNNs) have achieved remarkable progress in various fields. However, due to the limitations of the message passing mechanism, there exists an upper limit on GNNs' expressiveness. Some high-order GNNs have achieved good results in expressiveness, but they have shortcomings in complexity and real-world performance. In this paper, we attempt to provide a graph neural network architecture that simultaneously addresses expressiveness, complexity, and real-world performance. To this end, we propose Spatially constrained Random walk diffusion structural Encoding (SRE) to encode structural information; it can be used with any GNN under our architecture. Our extensive and diverse experiments on datasets of different types and sizes demonstrate the superior expressiveness and state-of-the-art performance of our architecture on real-world tasks.

30. Message gain aggregation architecture: a scalable graph neural network for combining large-scale neighborhoods

XianRui Li, Yan Wang - SPIE, 2023

Graph Neural Networks (GNNs) extend traditional deep neural networks to graph-structured data and have greatly succeeded in graph representation learning scenarios. However, current mainstream GNNs based on message passing mechanisms are challenging to scale to large graph data due to severe scalability and over-smoothing issues. In this paper, we propose a novel scheme called the Message Gain Aggregation Architecture (MGAA) to alleviate the over-smoothing problem while maintaining a scalable architecture and good expressiveness. Our MGAA network considers the nodes themselves separately from their neighbors and treats the neighbors as gains to be imposed on the nodes. A theoretical complexity analysis demonstrates that the proposed MGAA network maintains linear time and space complexity. Furthermore, the experimental results show that our model achieves sufficiently competitive performance compared with current popular state-of-the-art GNNs.

31. Graph convolutional neural networks for distributed recommender and prediction systems

Mohd Usman, Rahul Shukla, Nayab Zya - Routledge, 2023

Deep neural networks have gained prominence in recent years in areas such as image processing, computer vision, speech recognition, machine translation, self-driving vehicles, and healthcare. Deep learning, a subset of machine learning and AI, is revolutionising our lives. Graph deep learning is a new field, and GNNs were developed specifically for graph-structured data. While GNNs outperform conventional approaches in tasks like semi-supervised node classification, their application to other graph learning problems has either not been studied or does not yet achieve acceptable performance. This paper explores graph deep learning in more detail. There are many more graph learning challenges that can be solved using graph neural networks.

32. CurvDrop: A Ricci Curvature Based Approach to Prevent Graph Neural Networks from Over-Smoothing and Over-Squashing

Yang Liu, Chuan Zhou, Shirui Pan - ACM, 2023

Graph neural networks (GNNs) are powerful models for handling graph data and can achieve state-of-the-art performance in many critical tasks, including node classification and link prediction. However, existing graph neural networks still face the twin challenges of over-smoothing and over-squashing identified in previous literature. To this end, we propose a new curvature-based topology-aware dropout sampling technique named CurvDrop, in which we integrate the Discrete Ricci Curvature into graph neural networks to enable more expressive graph models. This work also improves graph neural networks by quantifying connections in graphs and using structural information such as community structures. As a result, our method can tackle both over-smoothing and over-squashing with theoretical justification. Numerous experiments on public datasets show the effectiveness and robustness of our proposed method. The code and data are released at https://github.com/liu-yang-maker/Curvature-based-Dropout.
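
As a rough illustration of curvature-guided edge dropout, the sketch below substitutes the cheap Forman curvature 4 - deg(u) - deg(v) for the discrete Ricci curvature CurvDrop actually uses, and the mapping from curvature to drop probability is our own assumption.

```python
# Curvature-guided edge dropout sketch (Forman curvature as a cheap stand-in).
import numpy as np

def curvature_dropout(edges, deg, drop_frac=0.2, rng=None):
    rng = rng or np.random.default_rng()
    curv = np.array([4 - deg[u] - deg[v] for u, v in edges], dtype=float)
    probs = 1.0 / (1.0 + np.exp(curv))        # more negative curvature -> likelier drop
    probs = np.clip(drop_frac * probs / probs.mean(), 0, 1)
    keep = rng.random(len(edges)) > probs
    return [e for e, k in zip(edges, keep) if k]

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
deg = np.bincount(np.array(edges).ravel())    # each edge counts toward both endpoints
kept = curvature_dropout(edges, deg, drop_frac=0.4, rng=np.random.default_rng(0))
```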

33. Processing-in-memory (PIM)-based Manycore Architecture for Training Graph Neural Networks

Partha Pratim Pande - IEEE, 2023

Graph Neural Networks (GNNs) enable comprehensive predictive analytics over graph-structured data. They have become popular in diverse real-world applications. A key challenge in facilitating such analytics is to learn good representations over nodes, edges, and graphs. Unlike traditional Deep Neural Networks (DNNs), which work over regular structures (images or sequences), GNNs operate on graphs. The computations associated with GNNs can be divided into two parts: 1) vertex-centric computations involving trainable weights, like conventional DNNs, and 2) edge-centric computations, which accumulate neighboring vertices' information along the edges of the graph. Hence, GNN training exhibits characteristics of both DNN training, which is compute-intensive, and graph computation, which exhibits heavy data exchange. Conventional CPU- or GPU-based systems are not tailor-made for applications that exhibit such traits. This necessitates the development of new and efficient hardware architectures tailored for GNN training and inference.

34. MVMNET: Graph Classification Pooling Method with Maximum Variance Mapping

Lingang Wang, Lei Sun - Academy and Industry Research Collaboration Center (AIRCC), 2023

Graph Neural Networks (GNNs) have been shown to effectively model graph-structured data for tasks such as graph node classification, link prediction, and graph classification. The graph pooling method is an indispensable structure in the graph neural network model. Traditional graph neural network pooling methods all employ downsampling or node aggregation to reduce graph nodes. However, these methods do not fully consider the spatial distribution of nodes across different graph classes, making it difficult to distinguish classes of graphs whose spatial locations are close to each other. To solve such problems, this article proposes a Maximum Variance graph feature Multistructure graph classification method (MVM), which extracts graph information from the perspectives of node features and graph topology. From the node-feature perspective, we enlarge the variance between different classes while maintaining the variance within each class. Then hierarchical graph convolution and pooling are performed from a topological perspective and combined with a CNN readout layer.

35. A review of challenges and solutions in the design and implementation of deep graph neural networks

Aafaq Mohi ud din, Shaima Qureshi - Informa UK Limited, 2023

The study of graph neural networks has revealed that they can unleash new applications in a variety of disciplines using a basic process that we cannot imagine in the context of other deep learning designs. Numerous limitations restrict their expressiveness, and researchers are working to overcome them to fully exploit the power of graph data. A number of publications explore the restrictions and bottlenecks of graph neural networks (GNNs), but the common thread that runs through them all is that they can be traced back to message passing, the key technique used to train graph models. We outline the general GNN design pipeline in this study, discuss solutions to the over-smoothing problem, categorize the solutions, and identify open challenges for further research.

36. Graph Neural Network and Its Applications

Sougatamoy Biswas - IGI Global, 2023

Graph neural networks (GNNs) are an emerging field in deep learning. Graphs have more expressive power than any other data structure. The graph neural network is one of the application areas of deep learning, with uses in domains where traditional convolutional neural networks cannot give the desired result. Graphs are, at base, nodes connected by edges. Recommendation systems, image processing, and fraud detection are a few of the application areas of graph neural networks. Because graphs are flexible by nature, they adapt readily to these domains, and GNNs deal with these types of problems more effectively than convolutional neural networks. To apply a GNN to a specific problem domain, the data needs to be converted into a graphical format, and then neural network operations can be executed. The main feature of a GNN is to inherit information from its neighborhood; this is called graph embedding. This chapter describes basic GNN architecture, GNN advantages over CNNs, and applications in different domains.

37. Graph Classification of Graph Neural Networks

Gotam Singh Lalotra, Ashok Sharma, Barun Kumar Bhatti - IGI Global, 2023

Graph neural networks have recently come to the fore as the top machine learning architecture for supervised learning on graph and relational data. This chapter provides an overview of GNNs for graph classification, i.e., GNNs that learn a graph-level output. Because GNNs compute node-level representations, pooling layers, which learn graph-level representations from node-level ones, are essential elements for successful graph classification; hence, the authors give a thorough overview of pooling layers. The constraints of GNNs for graph classification are further discussed, along with developments made in overcoming them. Finally, they review some GNN applications for graph classification and give an overview of benchmark datasets for empirical analysis.

38. Introduction to Graph Neural Network

G S - IGI Global, 2023

Deep learning on graphs is an upcoming area of study. This chapter provides an introduction to graph neural networks (GNNs), a type of neural network designed to process data represented in the form of graphs. First, it summarizes deep learning on graphs. The fundamental concepts of graph neural networks, as well as GNN theories, are then explained, along with the different types of GNN. Finally, the chapter explains where GNNs are applied and for what purposes, exploring applications in fields such as social network analysis, recommendation systems, drug discovery, computer vision, and natural language processing. With the increasing prevalence of graph data, GNNs are becoming increasingly important and will likely continue to play a significant role in many fields in the future.

39. Application and Some Fundamental Study of GNN In Forecasting

Arun Kumar Garov, A. K. Awasthi, Ram Kumar - IGI Global, 2023

The chapter covers applications of GNNs, together with the underlying fundamentals, across different fields. First, it discusses graphs through their graph-theoretic and mathematical underpinnings. Second, it introduces the datasets used for forecasting and predictive analysis, along with fundamental concepts that support decision making on open problems. Third, it presents graph neural network models with examples, an important part of the chapter. The chapter helps fill the research gap on forecasting models that use graph neural networks, applying machine learning to data analysis through a large number of examples.

40. Generalizing Graph Neural Network across Graphs and Time

Zhihao Wen - ACM, 2023

Graph-structured data widely exist in diverse real-world scenarios, and analysis of these graphs can uncover valuable insights about their respective application domains. However, most previous work focused on learning node representations from a single fixed graph, while many real-world scenarios require representations to be quickly generated for unseen nodes, new edges, or entirely new graphs. This inductive ability is essential for high-throughput machine learning systems. The inductive graph representation problem is quite difficult compared to the transductive setting, because generalizing to unseen nodes requires new subgraphs containing the new nodes to be aligned to the already-trained neural network. Meanwhile, following a message passing framework, the graph neural network (GNN) is an inductive and powerful graph representation tool. We further explore inductive GNNs from more specific perspectives: (1) generalizing GNNs across graphs, where we tackle semi-supervised node classification across graphs; (2) generalizing GNNs across time.

41. Local-to-global Perspectives on Graph Neural Networks

Cai Chen, 2023

This thesis presents a local-to-global perspective on graph neural networks (GNN), the leading architecture to process graph-structured data. After categorizing GNN into local Message Passing Neural Networks (MPNN) and global Graph transformers, we present three pieces of work: 1) study the convergence property of a type of global GNN, Invariant Graph Networks, 2) connect the local MPNN and global Graph Transformer, and 3) use local MPNN for graph coarsening, a standard subroutine used in global modeling.

42. Infinite Width Graph Neural Networks for Node Regression/ Classification

Yunus Cobanoglu, 2023

This work analyzes Graph Neural Networks, a generalization of fully-connected deep neural nets to graph-structured data, when their width, i.e., the number of nodes in each fully-connected layer, increases to infinity. Infinite-width neural networks connect deep learning to Gaussian Processes and kernels, both machine learning frameworks with long traditions and extensive theoretical foundations. Gaussian Processes and kernels have far fewer hyperparameters than neural networks and can be used for uncertainty estimation, making them more user-friendly for applications. This work extends the growing body of research connecting Gaussian Processes and kernels to neural networks. The kernel and Gaussian Process closed forms are derived for a variety of architectures, namely the standard Graph Neural Network, the Graph Neural Network with Skip-Concatenate connections, and the Graph Attention Neural Network. All architectures are evaluated on a variety of datasets on the task of transductive node regression and classification.

43. Graph Representation Learning

Davide Bacciu, Federico Errica, Alessio Micheli - Ciaco - i6doc.com, 2023

In a broad range of real-world machine learning applications, representing examples as graphs is crucial to avoid a loss of information. For this reason, in the last few years, the definition of machine learning methods, particularly neural networks, for graph-structured inputs has been gaining increasing attention. In particular, Deep Graph Networks (DGNs) are nowadays the most commonly adopted models to learn a representation that can be used to address different tasks related to nodes, edges, or even entire graphs. This tutorial paper reviews fundamental concepts and open challenges of graph representation learning and summarizes the contributions that have been accepted for publication to the ESANN 2023 special session on the topic.

44. The Expressive Power of Graph Neural Networks: A Survey

Bingxu Zhang, Changjun Fan, Shixuan Liu, 2023

Graph neural networks (GNNs) are effective machine learning models for many graph-related applications. Despite their empirical success, many research efforts focus on the theoretical limitations of GNNs, i.e., the GNNs' expressive power. Early works in this domain mainly focus on studying the graph isomorphism recognition ability of GNNs, while recent works try to leverage properties such as subgraph counting and connectivity learning to characterize the expressive power of GNNs, which are more practical and closer to real-world settings. However, no survey paper or open-source repository comprehensively summarizes and discusses models in this important direction. To fill the gap, we conduct a first survey of models for enhancing expressive power under different forms of definition. Concretely, the models are reviewed based on three categories, i.e., graph feature enhancement, graph topology enhancement, and GNN architecture enhancement.

45. MathNet: Haar-like wavelet multiresolution analysis for graph representation learning

Xuebin Zheng, Bingxin Zhou, Ming Li - Elsevier BV, 2023

Graph neural networks (GNNs) have recently attracted great attention and achieved significant progress in graph-level applications. In this paper, we propose a framework for graph neural networks with multiresolution Haar-like wavelets, or MathNet, with interrelated convolution and pooling strategies. The rendering method takes graphs in different structures as input and assembles consistent graph representations for readout layers, which then accomplish label prediction. To achieve this, multiresolution graph representations are first constructed and fed into graph convolutional layers for processing. Hierarchical graph pooling layers are then involved to downsample graph resolution while simultaneously removing redundancy in graph signals. The whole workflow forms a multilevel graph analysis, which not only helps embed the intrinsic topological information of each graph into the GNN, but also supports fast computation of forward and adjoint graph transforms. Extensive experiments present notable accuracy gains of the proposed MathNet on graph classification.

46. UGSL: A Unified Framework for Benchmarking Graph Structure Learning

Bahare Fatemi, Sami Abu-El-Haija, Anton Tsitsulin, 2023

Graph neural networks (GNNs) demonstrate outstanding performance in a broad range of applications. While the majority of GNN applications assume that a graph structure is given, some recent methods substantially expanded the applicability of GNNs by showing that they may be effective even when no graph structure is explicitly provided. The GNN parameters and a graph structure are jointly learned. Previous studies adopt different experimentation setups, making it difficult to compare their merits. In this paper, we propose a benchmarking strategy for graph structure learning using a unified framework. Our framework, called Unified Graph Structure Learning (UGSL), reformulates existing models into a single model. We implement a wide range of existing models in our framework and conduct extensive analyses of the effectiveness of different components in the framework. Our results provide a clear and concise understanding of the different methods in this area as well as their strengths and weaknesses. The benchmark code is available at https://github.com/google-research/google-research/tr...

47. Uplifting the Expressive Power of Graph Neural Networks through Graph Partitioning

Asela Hevapathige, Qing Wang, 2023

Graph Neural Networks (GNNs) have paved their way to becoming a cornerstone in graph-related learning tasks. From a theoretical perspective, the expressive power of GNNs is primarily characterised by their ability to distinguish non-isomorphic graphs. It is well known that most conventional GNNs are upper-bounded by the Weisfeiler-Lehman graph isomorphism test (1-WL). In this work, we study the expressive power of graph neural networks through the lens of graph partitioning. This follows from our observation that permutation-invariant graph partitioning enables a powerful way of exploring structural interactions among vertex sets and subgraphs, and can help uplift the expressive power of GNNs efficiently. Based on this, we first establish a theoretical connection between graph partitioning and graph isomorphism. We then introduce a novel GNN architecture, namely Graph Partitioning Neural Networks (GPNNs), and theoretically analyse how a graph partitioning scheme and different kinds of structural interactions relate to the k-WL hierarchy.

48. To Think Like a Vertex (or Not) for Distributed Training of Graph Neural Networks

Varad Kulkarni, Akarsh Chaturvedi, Pranjal Naman - IEEE, 2023

Graph Neural Networks (GNNs) train neural networks that combine the topological properties of a graph with the vertex and edge features to perform tasks such as node classification and link prediction. We propose a novel middleware that approaches GNN training from the perspective of a vertex-centric model (VCM) of distributed graph processing and overlays neural network training on it. Giraph Graph Neural Network (G2N2) uses a three-phase execution pattern, constructing a distributed computation graph per mini-batch and mapping the forward and backward passes of GNN training to the VCM. We implement a prototype of G2N2 in Apache Giraph and report results from a preliminary evaluation using two real-world graphs on a commodity cluster.

49. Feature Expansion for Graph Neural Networks

Jiaqi Sun, Lin Zhang, Guangyi Chen, 2023

Graph neural networks aim to learn representations for graph-structured data and show impressive performance, particularly in node classification. Recently, many methods have studied the representations of GNNs from the perspective of optimization goals and spectral graph theory. However, the feature space that dominates representation learning has not been systematically studied in graph neural networks. In this paper, we propose to fill this gap by analyzing the feature space of both spatial and spectral models. We decompose graph neural networks into determined feature spaces and trainable weights, providing the convenience of studying the feature space explicitly using matrix space analysis. In particular, we theoretically find that the feature space tends to be linearly correlated due to repeated aggregations. Motivated by these findings, we propose 1) feature subspace flattening and 2) structural principal components to expand the feature space. Extensive experiments verify the effectiveness of our proposed, more comprehensive feature space, with comparable inference time.

50. Knowledge Distillation on Graphs: A Survey

Yijun Tian, Shichao Pei, Xiangliang Zhang, 2023

Graph Neural Networks (GNNs) have attracted tremendous attention by demonstrating their capability to handle graph data. However, they are difficult to deploy on resource-limited devices due to model sizes and the scalability constraints imposed by multi-hop data dependency. In addition, real-world graphs usually possess complex structural information and features. Therefore, to improve the applicability of GNNs and fully encode the complicated topological information, knowledge distillation on graphs (KDG) has been introduced to build a smaller yet effective model and exploit more knowledge from data, leading to model compression and performance improvement. Recently, KDG has achieved considerable progress, with many studies proposed. In this survey, we systematically review these works. Specifically, we first introduce the challenges and bases of KDG, then categorize and summarize existing works by answering the following three questions: 1) what to distillate, 2) who to whom, and 3) how to distillate. Finally, we share our thoughts on future research directions.
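
For orientation, the core logit-distillation loss that KDG methods build on fits in a few lines; graph-specific variants then add structure-aware terms (e.g., matching intermediate node embeddings). The temperature-scaled formulation below is the standard one assumed for illustration, not any particular KDG method.

```python
# Basic knowledge-distillation loss: cross-entropy against the teacher's
# temperature-softened predictions, scaled by T^2.
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)       # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    p_t = softmax(teacher_logits, T)           # soft targets from the teacher GNN
    p_s = softmax(student_logits, T)
    return -(T * T) * np.mean(np.sum(p_t * np.log(p_s + 1e-12), axis=1))
```

On graphs, the student is typically a smaller GNN, or even an MLP that drops the multi-hop dependency entirely, which is what makes deployment on resource-limited devices feasible.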

51. Graph Neural Network: The Next Frontier in Deep Learning

52. Graph Pooling for Graph Neural Networks: Progress, Challenges, and Opportunities

53. Degree-based stratification of nodes in Graph Neural Networks

54. NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification

55. Zero-One Laws of Graph Neural Networks
