We provide a differentially private algorithm for hypothesis selection. Given samples from an unknown probability distribution P and a set of m probability distributions H, the goal is to output, in an ε-differentially private manner, a distribution from H whose total variation distance to P is comparable to that of the best such distribution (which we denote by α). The sample complexity of our basic algorithm is O(log m/α² + log m/(αε)), representing a minimal cost for privacy when compared to the non-private algorithm. We can also handle infinite hypothesis classes H by relaxing to (ε, δ)-differential privacy. We apply our hypothesis selection algorithm to give learning algorithms for a number of natural distribution classes, including Gaussians, product distributions, sums of independent random variables, piecewise polynomials, and mixture classes. Our hypothesis selection procedure allows us to generically convert a cover for a class to a learning algorithm, complementing known learning lower bounds which are in terms of the packing number of the class. As the covering and packing numbers are often closely related, for constant α, our algorithms achieve the optimal sample complexity for many classes of interest. Finally, we describe an application to private distribution-free PAC learning.
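A standard building block for ε-differentially private selection of this kind is the exponential mechanism: each candidate hypothesis receives a data-dependent utility score (for instance, based on pairwise Scheffé-style comparisons), and a hypothesis is sampled with probability proportional to exp(ε·score/(2·Δ)), where Δ is the sensitivity of the score. The sketch below is illustrative only and is not the paper's algorithm; the function name and score convention are our own assumptions.

```python
import math
import random

def exponential_mechanism(scores, epsilon, sensitivity=1.0):
    """Privately select an index from `scores` (higher = better hypothesis).

    Samples index i with probability proportional to
    exp(epsilon * scores[i] / (2 * sensitivity)), which satisfies
    epsilon-differential privacy when each score has the given sensitivity.
    This is a generic sketch, not the algorithm from the paper.
    """
    # Subtract the max score before exponentiating for numerical stability.
    top = max(scores)
    weights = [math.exp(epsilon * (s - top) / (2.0 * sensitivity)) for s in scores]
    total = sum(weights)
    # Inverse-CDF sampling over the unnormalized weights.
    r = random.random() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    return len(scores) - 1
```

With a large privacy budget ε the mechanism concentrates on the best-scoring hypothesis; as ε shrinks, the output distribution flattens toward uniform, which is where the additional log m/(αε) term in the sample complexity comes from.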
|Original language||English (US)|
|Journal||Advances in Neural Information Processing Systems|
|State||Published - 2019|
|Event||33rd Annual Conference on Neural Information Processing Systems, NeurIPS 2019 - Vancouver, Canada|
Duration: Dec 8 2019 → Dec 14 2019
Bibliographical note: Funding Information:
The authors would like to thank Shay Moran for bringing to their attention the application to PAC learning mentioned in the supplement, Jonathan Ullman for asking questions which motivated Remark 1, and Clément Canonne for assistance in reducing the constant factor in the approximation guarantee. This work was done while the authors were all affiliated with the Simons Institute for the Theory of Computing. MB was supported by a Google Research Fellowship, as part of the Simons-Berkeley Research Fellowship program. GK was supported by a Microsoft Research Fellowship, as part of the Simons-Berkeley Research Fellowship program, and the work was also partially done while visiting Microsoft Research, Redmond. TS was supported by a Patrick J. McGovern Research Fellowship, as part of the Simons-Berkeley Research Fellowship program. ZSW was supported in part by a Google Faculty Research Award, a J.P. Morgan Faculty Award, and a Facebook Research Award.
∗Simons Institute for the Theory of Computing and Boston University. Supported by a Google Research Fellowship, as part of the Simons-Berkeley Research Fellowship program.
†Simons Institute for the Theory of Computing and University of Waterloo. Supported as a Microsoft Research Fellow, as part of the Simons-Berkeley Research Fellowship program. Part of this work was completed while visiting Microsoft Research, Redmond.
§University of Minnesota, Twin Cities. Part of this work was completed while visiting the Simons Institute for the Theory of Computing at UC Berkeley. Supported in part by a Google Faculty Research Award, a J.P. Morgan Faculty Award, and a Facebook Research Award.
© 2019 Neural information processing systems foundation. All rights reserved.