Meta Continual Learning on Graphs with Experience Replay

Altay Unal, Abdullah Akgül, Melih Kandemir, Gozde Unal

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Continual learning is a machine learning setting in which a model must learn a sequence of incoming tasks while maintaining its performance on earlier ones. To address this challenge, we devise a technique that combines two important concepts in machine learning, namely the "replay buffer" and "meta learning", aiming to exploit the best of both worlds. In this method, the model weights are first updated using the dataset of the current task. Next, the current task's dataset is merged with samples stored from earlier tasks, and the model weights are updated on the combined dataset. This prevents the model weights from converging to parameters optimal only for the current task and preserves information from earlier tasks. We adapt our technique to graph-structured data and the task of node classification on graphs. We introduce MetaCLGraph, which outperforms baseline methods on various graph datasets, including Citeseer, Corafull, Arxiv, and Reddit. These results illustrate the potential of combining a replay buffer with meta learning in the field of continual learning on graphs.
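The two-step update described in the abstract can be sketched in a few lines. The following is a minimal, hedged illustration (not the authors' implementation): it uses a plain logistic-regression classifier in place of a graph neural network, synthetic arrays in place of the graph datasets, and a hypothetical fixed-size sampling rule for the replay buffer. The structure of the loop follows the abstract: first a gradient step on the current task alone, then a step on the current task merged with buffered samples from earlier tasks.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_step(w, X, y, lr=0.1):
    """One gradient step of logistic regression (a stand-in for a GNN update)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    g = X.T @ (p - y) / len(y)
    return w - lr * g

# Hypothetical task stream: each task is a small binary classification dataset.
tasks = [(rng.normal(size=(32, 4)), rng.integers(0, 2, 32).astype(float))
         for _ in range(3)]

buffer_X, buffer_y = [], []   # replay buffer of samples from earlier tasks
w = np.zeros(4)

for X, y in tasks:
    # Step 1: weights are first updated on the current task dataset alone.
    w = grad_step(w, X, y)
    # Step 2: merge the current task with stored samples from earlier tasks
    # and update on the combined dataset, preserving earlier-task information.
    if buffer_X:
        Xc = np.vstack([X] + buffer_X)
        yc = np.concatenate([y] + buffer_y)
    else:
        Xc, yc = X, y
    w = grad_step(w, Xc, yc)
    # Store a few samples from the current task for future replay
    # (8 per task is an arbitrary illustrative buffer size).
    idx = rng.choice(len(y), size=8, replace=False)
    buffer_X.append(X[idx])
    buffer_y.append(y[idx])
```

The key design choice this mirrors is that the second update is computed on the merged dataset rather than interleaving separate replay steps, so the final parameters after each task reflect both current and past data.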

Original language: English
Journal: Transactions on Machine Learning Research
Volume: 2023
Publication status: Published - 1 Nov 2023

Bibliographical note

Publisher Copyright:
© 2023, Transactions on Machine Learning Research. All rights reserved.
