Deep Graph Contrastive Representation Learning


Content provided by Yanqiao Zhu, the first author of the paper Deep Graph Contrastive Representation Learning.

This paper presents a novel contrastive framework for unsupervised graph representation learning. The proposed GRACE framework maximizes the agreement between node representations in two graph views, generated by corrupting the graph at both the structure and attribute levels. A theoretical analysis based on the InfoMax principle and the classical triplet loss justifies the motivation behind the framework. Extensive experiments demonstrate its superiority over existing state-of-the-art methods; GRACE even surpasses supervised counterparts on transductive tasks.
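To make the node-level agreement objective concrete, here is a minimal sketch of an InfoNCE-style contrastive loss over node embeddings from two views. It is a simplified illustration, not the exact GRACE objective (which also uses intra-view negatives); the function names `cosine` and `contrastive_loss` are hypothetical helpers introduced for this example.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def contrastive_loss(U, V, tau=0.5):
    """Node-level agreement loss between two views (simplified).

    U[i] and V[i] are the embeddings of node i in view 1 and view 2;
    they form the positive pair. Every other node's embedding in the
    second view serves as a negative sample. tau is the temperature.
    """
    n = len(U)
    total = 0.0
    for i in range(n):
        pos = math.exp(cosine(U[i], V[i]) / tau)
        # Denominator: positive plus all inter-view negatives.
        denom = sum(math.exp(cosine(U[i], V[k]) / tau) for k in range(n))
        total += -math.log(pos / denom)
    return total / n
```

When the two views agree on each node (positive pairs are similar, negatives dissimilar), the loss is small; shuffling the correspondence drives it up, which is exactly the signal the encoder is trained on.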

What’s New: (1) Contrastive learning techniques have rarely been explored in graph representation learning.

(2) Existing work mostly relies on global-local mutual information maximization (InfoMax), which requires an injective readout function to generate global graph embeddings. However, the injective property is too restrictive to satisfy in practice. The GRACE framework is much simpler, focusing on maximizing agreement at the node (local) level.

(3) Interpreting the InfoMax-based framework as optimizing the classical triplet loss further highlights the importance of the negative samples involved in the objective, which are often neglected in previous methods. GRACE proposes two levels of graph corruption to generate more diverse contexts for nodes in the different graph views.

How It Works: GRACE first performs graph corruption at both graph topology …
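The two-level corruption described above can be sketched as randomly dropping edges (topology level) and masking feature dimensions (attribute level). This is an illustrative sketch under assumed hyperparameter names; `corrupt_graph`, `p_edge`, and `p_feat` are hypothetical identifiers, not the paper's exact API.

```python
import random

def corrupt_graph(edges, features, p_edge=0.2, p_feat=0.3, seed=0):
    """Generate one corrupted view of a graph.

    edges:    list of (u, v) tuples.
    features: list of per-node feature vectors (all the same length).
    p_edge:   probability of dropping each edge (topology corruption).
    p_feat:   probability of masking each feature dimension to zero
              across all nodes (attribute corruption).
    """
    rng = random.Random(seed)
    # Topology-level corruption: drop each edge independently.
    kept_edges = [e for e in edges if rng.random() >= p_edge]
    # Attribute-level corruption: zero out whole feature dimensions.
    dim = len(features[0])
    mask = [0.0 if rng.random() < p_feat else 1.0 for _ in range(dim)]
    masked = [[x * m for x, m in zip(row, mask)] for row in features]
    return kept_edges, masked
```

Calling this twice with different seeds (or different drop rates) yields the two views whose node embeddings are then pulled together by the contrastive objective.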
