Convexifying Sparse Interpolation with Infinitely Wide Neural Networks: An Atomic Norm Approach

Akshay Kumar, Jarvis Haupt

Research output: Contribution to journal › Article › peer-review

Abstract

This work examines the problem of exact data interpolation via sparse (in neuron count), infinitely wide, single hidden layer neural networks with leaky rectified linear unit activations. Using the atomic norm framework of [Chandrasekaran et al. 2012], we derive simple characterizations of the convex hulls of the corresponding atomic sets for this problem under several different constraints on the weights and biases of the network, thus obtaining equivalent convex formulations for these problems. A modest extension of our proposed framework to a binary classification problem is also presented. We explore the efficacy of the resulting formulations experimentally, and compare with networks trained via gradient descent.
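
For orientation, the sketch below gives the generic atomic norm construction that this kind of formulation rests on, assuming only the standard gauge definition of Chandrasekaran et al. (2012) and a generic single leaky ReLU neuron parameterization; the particular weight and bias constraint sets, and the explicit convex hull characterizations derived in the letter, are not reproduced here. The symbol \(\mathcal{W}\) is a hypothetical placeholder for whichever constraint set on \((w,b)\) is imposed.

\[
\mathcal{A} \;=\; \Bigl\{ \pm\bigl(\sigma(w^\top x_1 + b),\,\ldots,\,\sigma(w^\top x_n + b)\bigr) \;:\; (w,b) \in \mathcal{W} \Bigr\},
\qquad \sigma(u) = \max(u, \alpha u),\ \ \alpha \in (0,1),
\]
\[
\|y\|_{\mathcal{A}} \;=\; \inf\Bigl\{ \textstyle\sum_j |c_j| \;:\; y = \sum_j c_j\, a_j,\ a_j \in \mathcal{A} \Bigr\}
\;=\; \inf\bigl\{ t > 0 \;:\; y \in t \cdot \mathrm{conv}(\mathcal{A}) \bigr\}.
\]

Here each atom collects the outputs of a single leaky ReLU neuron on the \(n\) training inputs, and \(\|y\|_{\mathcal{A}}\) measures the smallest total outer-layer weight of an (infinitely wide) network that interpolates the labels \(y\) exactly; obtaining a simple description of \(\mathrm{conv}(\mathcal{A})\) is what turns this minimum-norm interpolation into a tractable convex program.
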

Original language: English (US)
Article number: 9264658
Pages (from-to): 2114-2118
Number of pages: 5
Journal: IEEE Signal Processing Letters
Volume: 27
DOIs
State: Published - 2020
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 1994-2012 IEEE.

Keywords

  • Atomic norm
  • binary classification
  • convex optimization
  • interpolation
  • single hidden layer neural networks
