Hardness Results for Agnostically Learning Low-Degree Polynomial Threshold Functions

Ryan O'Donnell, Yi Wu, Ilias Diakonikolas, Rocco A. Servedio

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Hardness results for maximum agreement problems have close connections to hardness results for proper learning in computational learning theory. In this paper we prove two hardness results for the problem of finding a low-degree polynomial threshold function (PTF) which has the maximum possible agreement with a given set of labeled examples in ℝⁿ × {−1, 1}. We prove that for any constants d ≥ 1 and ∊ > 0:

• Assuming the Unique Games Conjecture, no polynomial-time algorithm can find a degree-d PTF that is consistent with a (1/2 + ∊) fraction of a given set of labeled examples in ℝⁿ × {−1, 1}, even if there exists a degree-d PTF that is consistent with a 1 − ∊ fraction of the examples.

• It is NP-hard to find a degree-2 PTF that is consistent with a (1/2 + ∊) fraction of a given set of labeled examples in ℝⁿ × {−1, 1}, even if there exists a halfspace (degree-1 PTF) that is consistent with a 1 − ∊ fraction of the examples.

These results immediately imply the following hardness of learning results: (i) Assuming the Unique Games Conjecture, there is no better-than-trivial proper learning algorithm that agnostically learns degree-d PTFs under arbitrary distributions; (ii) There is no better-than-trivial learning algorithm that outputs degree-2 PTFs and agnostically learns halfspaces (i.e., degree-1 PTFs) under arbitrary distributions.
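To make the objective concrete, the following sketch (illustrative only; not code from the paper) shows what "agreement" means for a PTF: a degree-d PTF is f(x) = sign(p(x)) for a degree-d polynomial p, and its agreement with a labeled sample is the fraction of examples (x, y) ∈ ℝⁿ × {−1, 1} with f(x) = y. The coefficient layout and the XOR-style example set are hypothetical choices for the sketch.

```python
def ptf_sign(coeffs, x):
    """Evaluate sign(p(x)) for a degree-2 polynomial over R^2:
    p(x) = c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x1*x2 + c5*x2^2.
    (This coefficient layout is an assumption of the sketch.)"""
    x1, x2 = x
    p = (coeffs[0] + coeffs[1] * x1 + coeffs[2] * x2
         + coeffs[3] * x1 * x1 + coeffs[4] * x1 * x2 + coeffs[5] * x2 * x2)
    return 1 if p >= 0 else -1

def agreement(coeffs, examples):
    """Fraction of labeled examples (x, y) the PTF classifies correctly."""
    hits = sum(1 for x, y in examples if ptf_sign(coeffs, x) == y)
    return hits / len(examples)

# Four points with an XOR-like labeling: no halfspace agrees with all of
# them, but the degree-2 PTF sign(x1 * x2) agrees with every example.
examples = [((1, 1), 1), ((-1, -1), 1), ((1, -1), -1), ((-1, 1), -1)]
print(agreement((0, 0, 0, 0, 1, 0), examples))  # degree-2 PTF → 1.0
print(agreement((0, 1, 0, 0, 0, 0), examples))  # halfspace sign(x1) → 0.5
```

The maximum agreement problem asks for the coefficient vector maximizing this fraction; the paper's results say that even approximating the maximum beyond the trivial 1/2 is hard in the stated regimes.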

Original language: English
Title of host publication: Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms
Pages: 1590-1606
Number of pages: 17
ISBN (Electronic): 978-1-61197-308-2
Publication status: Published - 2011
