K-nearest neighbor learning with graph neural networks

Research output: Contribution to journal › Article › peer-review

Abstract

k-nearest neighbor (kNN) is a widely used learning algorithm for supervised learning tasks. In practice, the main challenge when using kNN is its high sensitivity to hyperparameter settings, including the number of nearest neighbors k, the distance function, and the weighting function. To improve robustness to these hyperparameters, this study presents a novel kNN learning method based on a graph neural network, named kNNGNN. Given training data, the method learns a task-specific kNN rule in an end-to-end fashion by means of a graph neural network that takes the kNN graph of an instance as input and predicts the instance's label. The distance and weighting functions are implicitly embedded within the graph neural network. For a query instance, the prediction is obtained by performing a kNN search over the training data to construct a kNN graph and passing it through the graph neural network. The effectiveness of the proposed method is demonstrated on various benchmark datasets for classification and regression tasks.
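The prediction pipeline described above (kNN search, kNN-graph construction, label prediction) can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: in kNNGNN the distance and weighting functions are learned by a graph neural network, whereas here a fixed softmax over negative Euclidean distances stands in for the learned edge weights.

```python
import numpy as np

def knn_graph_predict(X_train, y_train, x_query, k=3):
    """Sketch of the kNN-graph prediction pipeline:
    1) kNN search over the training data for the query instance,
    2) build a star-shaped kNN graph (query node linked to k neighbors),
    3) aggregate neighbor labels along the edges with distance-based weights.
    In kNNGNN, steps 2-3 are performed by a trained graph neural network;
    the softmax weighting below is a hand-coded stand-in.
    """
    d = np.linalg.norm(X_train - x_query, axis=1)  # distances to all training points
    idx = np.argsort(d)[:k]                        # indices of the k nearest neighbors
    w = np.exp(-d[idx])                            # unnormalized edge weights
    w /= w.sum()                                   # softmax over negative distances
    classes = np.unique(y_train)
    # weighted vote: sum edge weights per class among the neighbors
    scores = np.array([w[y_train[idx] == c].sum() for c in classes])
    return classes[np.argmax(scores)]

# Toy two-class example
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
y = np.array([0, 0, 1, 1])
print(knn_graph_predict(X, y, np.array([0.05, 0.0]), k=3))  # → 0
```

Replacing the fixed weighting with a message-passing network that maps the graph to a label is what makes the rule task-specific and learnable end-to-end.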

Original language: English
Article number: 830
Journal: Mathematics
Volume: 9
Issue number: 8
DOIs
State: Published - 2 Apr 2021

Keywords

  • Deep learning
  • Graph neural network
  • Instance-based learning
  • K-nearest neighbor

