TY - JOUR

T1 - Private hypothesis selection

AU - Bun, Mark

AU - Kamath, Gautam

AU - Steinke, Thomas

AU - Wu, Zhiwei Steven

PY - 2019

Y1 - 2019

N2 - We provide a differentially private algorithm for hypothesis selection. Given samples from an unknown probability distribution P and a set of m probability distributions H, the goal is to output, in an ε-differentially private manner, a distribution from H whose total variation distance to P is comparable to that of the best such distribution (which we denote by α). The sample complexity of our basic algorithm is O(log m/α² + log m/(αε)), representing a minimal cost for privacy when compared to the non-private algorithm. We can also handle infinite hypothesis classes H by relaxing to (ε, δ)-differential privacy. We apply our hypothesis selection algorithm to give learning algorithms for a number of natural distribution classes, including Gaussians, product distributions, sums of independent random variables, piecewise polynomials, and mixture classes. Our hypothesis selection procedure allows us to generically convert a cover for a class to a learning algorithm, complementing known learning lower bounds which are in terms of the size of the packing number of the class. As the covering and packing numbers are often closely related, for constant α, our algorithms achieve the optimal sample complexity for many classes of interest. Finally, we describe an application to private distribution-free PAC learning.

AB - We provide a differentially private algorithm for hypothesis selection. Given samples from an unknown probability distribution P and a set of m probability distributions H, the goal is to output, in an ε-differentially private manner, a distribution from H whose total variation distance to P is comparable to that of the best such distribution (which we denote by α). The sample complexity of our basic algorithm is O(log m/α² + log m/(αε)), representing a minimal cost for privacy when compared to the non-private algorithm. We can also handle infinite hypothesis classes H by relaxing to (ε, δ)-differential privacy. We apply our hypothesis selection algorithm to give learning algorithms for a number of natural distribution classes, including Gaussians, product distributions, sums of independent random variables, piecewise polynomials, and mixture classes. Our hypothesis selection procedure allows us to generically convert a cover for a class to a learning algorithm, complementing known learning lower bounds which are in terms of the size of the packing number of the class. As the covering and packing numbers are often closely related, for constant α, our algorithms achieve the optimal sample complexity for many classes of interest. Finally, we describe an application to private distribution-free PAC learning.

UR - http://www.scopus.com/inward/record.url?scp=85090174910&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85090174910&partnerID=8YFLogxK

M3 - Conference article

AN - SCOPUS:85090174910

VL - 32

JO - Advances in Neural Information Processing Systems

JF - Advances in Neural Information Processing Systems

SN - 1049-5258

T2 - 33rd Annual Conference on Neural Information Processing Systems, NeurIPS 2019

Y2 - 8 December 2019 through 14 December 2019

ER -