Training data selection for optimal generalization with noise variance reduction in neural networks

Sethu Vijayakumar, Masashi Sugiyama, Hidemitsu Ogawa

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, we discuss the problem of active training data selection in the presence of noise. We formalize learning in neural networks as an inverse problem within a functional analytic framework and adopt the Averaged Projection criterion as the optimization criterion for learning. Within this framework, we examine training data selection from two perspectives: improving the generalization ability and reducing the noise variance so as to achieve better learning results. The final result uses a priori correlation information on the noise characteristics and the original function ensemble to devise an efficient sampling scheme, which can be used in conjunction with the incremental learning schemes devised in our earlier work to achieve optimal generalization.
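To give a concrete flavor of noise-aware active sampling, the sketch below greedily selects training locations for a linear-in-features model so as to shrink the trace of the posterior parameter covariance, weighting each candidate by its known noise variance. This is a standard A-optimal-design illustration under assumed known heteroscedastic noise, not the paper's Averaged Projection scheme; the feature map and noise model are hypothetical choices.

```python
import numpy as np

def features(x):
    # Illustrative polynomial feature map (an assumption, not from the paper).
    return np.array([1.0, x, x**2])

def greedy_select(candidates, noise_var, n_pick, ridge=1e-3):
    """Greedily pick sample locations that most reduce the trace of the
    posterior parameter covariance (A-optimal design), with each candidate
    down-weighted by its known noise variance."""
    d = len(features(candidates[0]))
    A = ridge * np.eye(d)            # prior precision (ridge regularizer)
    chosen = []
    remaining = list(range(len(candidates)))
    for _ in range(n_pick):
        best, best_score = None, np.inf
        for i in remaining:
            phi = features(candidates[i])
            # Precision after hypothetically adding candidate i.
            Ai = A + np.outer(phi, phi) / noise_var[i]
            score = np.trace(np.linalg.inv(Ai))   # posterior variance proxy
            if score < best_score:
                best, best_score = i, score
        phi = features(candidates[best])
        A += np.outer(phi, phi) / noise_var[best]
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Candidate inputs on a grid; noise variance grows away from the origin.
xs = np.linspace(-1.0, 1.0, 21)
nv = 0.1 + xs**2
picked = greedy_select(xs, nv, n_pick=5)
```

Because low-noise candidates contribute more precision per sample, the greedy rule trades off informativeness of the feature vector against the noise level at each location.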
Original language: English
Title of host publication: Neural Nets WIRN Vietri-98
Publisher: Springer-Verlag GmbH
Pages: 153-166
Number of pages: 14
Publication status: Published - 1998
