HiGPT: Heterogeneous Graph Language Model

🏠 Data Intelligence Lab, University of Hong Kong. 🏢 Baidu, Inc.
(*Correspondence)

🔥One Model for Any Heterogeneous Graph🔥

🚀Cross-domain Zero-shot Heterogeneous Graph Learning🚀

🌟Few-shot Training with Instruction Augmentation🌟

🎉1-shot Beats 60-shot with Graph In-Context Learning🎉


Abstract

Heterogeneous graph learning aims to capture complex relationships and diverse relational semantics among entities in a heterogeneous graph to obtain meaningful representations for nodes and edges. Recent advances in heterogeneous graph neural networks (HGNNs) have achieved state-of-the-art performance by considering relation heterogeneity and using specialized message functions and aggregation rules. However, existing frameworks for heterogeneous graph learning have limited ability to generalize across diverse heterogeneous graph datasets. Most of them follow the "pre-train and fine-tune" paradigm on the same dataset, which restricts their capacity to adapt to new and unseen data. This raises the question: "Can we generalize heterogeneous graph models to be well-adapted to diverse downstream learning tasks with distribution shifts in both node token sets and relation type heterogeneity?" To tackle these challenges, we propose HiGPT, a general large graph model with a heterogeneous graph instruction-tuning paradigm. Our framework enables learning from arbitrary heterogeneous graphs without any fine-tuning on downstream datasets. To handle distribution shifts in heterogeneity, we introduce an in-context heterogeneous graph tokenizer that captures semantic relationships across different heterogeneous graphs, facilitating model adaptation. We incorporate a large corpus of heterogeneity-aware graph instructions into HiGPT, enabling the model to effectively comprehend complex relation heterogeneity and distinguish between various types of graph tokens. Furthermore, we introduce the Mixture-of-Thought (MoT) instruction augmentation paradigm to mitigate data scarcity by generating diverse and informative instructions. Comprehensive evaluations across various settings show that our framework generalizes exceptionally well, surpassing leading baselines.


Technical Description


• Methodology


Figure 1: The overall architecture of our HiGPT.


  • In-Context Heterogeneous Graph Tokenizer. To achieve adaptability across a wide range of heterogeneous graph scenarios with varying node and edge types, we introduce the in-context heterogeneous graph tokenizer. This tokenizer captures the diverse semantic relationships found in different heterogeneous graphs, providing a unified approach. To optimize performance and integrate the tokenizer seamlessly into the HiGPT framework, we pre-train it with a lightweight text-graph contrastive alignment paradigm; a minimal sketch of this alignment objective is given after this list.
  • Heterogeneous Graph Instruction-Tuning. We introduce a novel heterogeneous graph instruction-tuning framework that integrates inter-type and intra-type token matching tasks to fine-tune large language models (LLMs). Our framework specifically targets the enhancement of LLMs' understanding of both heterogeneous relation awareness and homogeneous relation awareness. Through these tasks, we aim to bolster the LLMs' capabilities in the following areas: (i) distinguishing between different types of graph tokens, (ii) comprehending intricate relationships within heterogeneous graphs, (iii) preserving the distinctive attributes of entities within homogeneous graphs, and (iv) effectively harnessing diverse graph instructions during the training process. An illustrative matching-task prompt is sketched after this list.

    Figure 2: Prompts for the three tasks of heterogeneous graph instruction-tuning.

  • Mixture-of-Thought Augmentation. Our approach introduces a novel mechanism for augmenting graph instructions, emphasizing the use of Mixture-of-Thought (MoT) combined with various prompting techniques. This integration enables us to generate a diverse and comprehensive set of informative task-specific instructions. By seamlessly incorporating these augmented graph instructions into our framework, we anticipate that our model enhancement will effectively address the challenge of data sparsity. An illustrative augmentation loop is sketched after this list.

    Figure 3: Mixture-of-Thought (MoT) Augmentation.
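As a concrete illustration of the text-graph contrastive alignment used to pre-train the tokenizer, the following is a minimal PyTorch sketch, not the released HiGPT code: `graph_emb` is assumed to come from the in-context heterogeneous graph tokenizer and `text_emb` from a text encoder applied to the matching node descriptions; names and shapes are illustrative.

```python
# Minimal sketch (not the released HiGPT code) of a text-graph contrastive
# alignment objective. `graph_emb` is assumed to come from the in-context
# heterogeneous graph tokenizer and `text_emb` from a text encoder applied to
# the matching node descriptions; shapes and names are illustrative.
import torch
import torch.nn.functional as F

def text_graph_contrastive_loss(graph_emb: torch.Tensor,
                                text_emb: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """graph_emb, text_emb: [batch, dim] embeddings of the same batch of nodes."""
    g = F.normalize(graph_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = g @ t.t() / temperature                    # pairwise similarities
    labels = torch.arange(g.size(0), device=g.device)   # i-th graph <-> i-th text
    # symmetric InfoNCE: graph-to-text and text-to-graph directions
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```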

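The matching-style instructions used for instruction-tuning can be illustrated with the sketch below. The prompt wording, the `<graph>` placeholder convention, and the helper name `build_inter_type_matching_instruction` are assumptions for illustration rather than HiGPT's exact templates (see Figure 2 for the actual prompts).

```python
# Minimal sketch (hypothetical prompt wording, not HiGPT's exact template) of an
# inter-type token matching instruction. Each sampled node is represented by a
# <graph> placeholder whose embedding the tokenizer injects at that position;
# the LLM is asked to recover the node type of every token.
import random

def build_inter_type_matching_instruction(nodes):
    """nodes: list of (node_id, node_type) pairs sampled from one heterogeneous subgraph."""
    shuffled = random.sample(nodes, len(nodes))
    token_seq = " ".join("<graph>" for _ in shuffled)
    candidate_types = sorted({node_type for _, node_type in shuffled})
    prompt = (
        "Given a sequence of heterogeneous graph tokens: "
        f"{token_seq}, with candidate node types {candidate_types}, "
        "state the node type of each token in order."
    )
    answer = ", ".join(node_type for _, node_type in shuffled)
    return {"prompt": prompt, "answer": answer,
            "node_ids": [node_id for node_id, _ in shuffled]}
```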

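The following is a minimal sketch of such a Mixture-of-Thought augmentation loop. The style hints in `PROMPT_STYLES` and the generic `llm_generate(prompt) -> str` helper are assumptions for illustration, not a specific API or our exact augmentation prompts.

```python
# Minimal sketch of a Mixture-of-Thought augmentation loop. The style hints in
# PROMPT_STYLES and the generic `llm_generate(prompt) -> str` helper are
# assumptions for illustration, not a specific API or our exact prompts.
PROMPT_STYLES = {
    "cot": "Let's think step by step.",
    "tot": "Explore several reasoning paths before settling on an answer.",
    "panel": "Three experts discuss the question and reach a consensus.",
}

def mot_augment(question: str, answer: str, llm_generate):
    """Produce one augmented instruction per prompting style for a seed Q/A pair."""
    augmented = []
    for style, hint in PROMPT_STYLES.items():
        reasoning = llm_generate(
            f"{question}\n{hint}\nThe correct answer is: {answer}."
        )
        augmented.append({"style": style,
                          "instruction": question,
                          "response": reasoning})
    return augmented
```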

• Experiments

Overall Performance Comparison

We performed node classification tasks on three datasets, exploring both few-shot and zero-shot settings. In the few-shot settings, our model was trained on the IMDB dataset with shot numbers ranging from 1 to 60 and evaluated on the IMDB test set of 1,000 samples. For the zero-shot settings, the model was trained on the IMDB dataset with the same shot numbers and tested on separate test sets from the DBLP and ACM datasets, each containing 1,000 samples. To enable cross-dataset transferability for supervised heterogeneous graph neural networks (GNNs), we unified node and edge categories and used a classifier trained with transfer data to accommodate variations in class counts across datasets. For self-supervised methods, which focus on learning embeddings for downstream heterogeneous graph nodes, we excluded the zero-shot settings. Part of the overall performance is shown in Table 1. The "-std" and "-cot" notations denote the standard test prompt with direct answers and the prompt with a Chain-of-Thought (CoT) feature, respectively.


Table 1: Performance comparison on node classification tasks in both few-shot and zero-shot settings. Since SSL methods focus on learning embeddings from downstream graphs, the zero-shot settings are excluded for them ("-").
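For reference, the k-shot splits described above can be built by sampling k labeled target nodes per class, as in the following illustrative sketch (not the released evaluation script; the `labels` mapping from node id to class id is an assumed format).

```python
# Illustrative sketch (not the released evaluation script) of building a k-shot
# training split: sample k labeled target nodes per class, keeping the
# 1,000-node test set fixed. `labels` maps node id -> class id (assumed format).
import random
from collections import defaultdict

def sample_k_shot(labels: dict, k: int, seed: int = 0):
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for node_id, cls in labels.items():
        by_class[cls].append(node_id)
    picked = []
    for cls, node_ids in sorted(by_class.items()):
        picked.extend(rng.sample(node_ids, min(k, len(node_ids))))
    return picked
```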

Model Ablation Test

To evaluate the effectiveness of the proposed modules, we individually remove the key components of HiGPT. The results are summarized in Table 2.


Table 2: Ablation study of our HiGPT.

Graph In-Context Learning

In-context learning (ICL) is a method for adapting large language models (LLMs) to new tasks without gradient updates, using a prompt that contains task examples. In this subsection, we explore the impact of graph in-context learning on HiGPT's performance. We conduct comprehensive tests by adding prefatory examples from the training set to models trained with different shots of IMDB data, randomly sampling training examples that correspond to the test data. "-ICL-1" and "-ICL-2" denote one and two prefatory examples, respectively, and "-ICL-DBLP" signifies the inclusion of DBLP examples before the ACM test prompt. The results are depicted in Figure 5.
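A minimal sketch of how such prefatory examples can be prepended to a test instruction is shown below; the `instruction` and `answer` field names are illustrative assumptions rather than the exact data format.

```python
# Minimal sketch of graph in-context learning: prepend k solved training examples
# (possibly from another dataset, e.g. DBLP examples before an ACM query) to the
# test instruction. The `instruction` / `answer` field names are assumptions.
def build_icl_prompt(test_example: dict, support_examples: list) -> str:
    parts = []
    for ex in support_examples:                  # k = 1 ("-ICL-1") or 2 ("-ICL-2")
        parts.append(f"{ex['instruction']}\nAnswer: {ex['answer']}")
    parts.append(test_example["instruction"])    # query is left unanswered
    return "\n\n".join(parts)
```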


Figure 5: Comprehensive results of graph in-context learning of our HiGPT.

Case Study

We perform a case study to showcase our HiGPT's robust generalization in understanding complex graph structures with diverse nodes and connections. Our model generates graph-aware predictions and responses, demonstrating its profound comprehension and awareness of graph-related aspects. Furthermore, we validate the positive impact of our MoT instruction augmentation.


Table 3: Visualization of HiGPT's responses with different prompt engineering techniques on IMDB for the action genre.

BibTeX

@article{tang2024higpt,
            title={HiGPT: Heterogeneous Graph Language Model}, 
            author={Jiabin Tang and Yuhao Yang and Wei Wei and Lei Shi and Long Xia and Dawei Yin and Chao Huang},
            year={2024},
            eprint={2402.16024},
            archivePrefix={arXiv},
            primaryClass={cs.CL}
      }