TY - GEN

T1 - Learning Poisson Binomial Distributions

AU - Daskalakis, Constantinos

AU - Diakonikolas, Ilias

AU - Servedio, Rocco A.

PY - 2012

Y1 - 2012

N2 - We consider a basic problem in unsupervised learning: learning an unknown Poisson Binomial Distribution. A Poisson Binomial Distribution (PBD) over {0,1,...,n} is the distribution of a sum of n independent Bernoulli random variables which may have arbitrary, potentially non-equal, expectations. These distributions were first studied by S. Poisson in 1837 and are a natural n-parameter generalization of the familiar Binomial Distribution. Surprisingly, prior to our work this basic learning problem was poorly understood, and known results for it were far from optimal.
We essentially settle the complexity of the learning problem for this basic class of distributions. As our main result we give a highly efficient algorithm which learns to ε-accuracy using Õ(1/ε³) samples independent of n. The running time of the algorithm is quasilinear in the size of its input data, i.e. Õ(log(n)/ε³) bit-operations (observe that each draw from the distribution is a log(n)-bit string). This is nearly optimal since any algorithm must use Ω(1/ε²) samples. We also give positive and negative results for some extensions of this learning problem.

AB - We consider a basic problem in unsupervised learning: learning an unknown Poisson Binomial Distribution. A Poisson Binomial Distribution (PBD) over {0,1,...,n} is the distribution of a sum of n independent Bernoulli random variables which may have arbitrary, potentially non-equal, expectations. These distributions were first studied by S. Poisson in 1837 and are a natural n-parameter generalization of the familiar Binomial Distribution. Surprisingly, prior to our work this basic learning problem was poorly understood, and known results for it were far from optimal.
We essentially settle the complexity of the learning problem for this basic class of distributions. As our main result we give a highly efficient algorithm which learns to ε-accuracy using Õ(1/ε³) samples independent of n. The running time of the algorithm is quasilinear in the size of its input data, i.e. Õ(log(n)/ε³) bit-operations (observe that each draw from the distribution is a log(n)-bit string). This is nearly optimal since any algorithm must use Ω(1/ε²) samples. We also give positive and negative results for some extensions of this learning problem.

KW - applied probability, computational learning theory, learning distributions

U2 - 10.1145/2213977.2214042

DO - 10.1145/2213977.2214042

M3 - Conference contribution

SN - 978-1-4503-1245-5

T3 - STOC '12

SP - 709

EP - 728

BT - Proceedings of the Forty-fourth Annual ACM Symposium on Theory of Computing

PB - ACM

CY - New York, NY, USA

ER -