With the recent progress in kernel-based learning methods, computation with Gram matrices has received immense attention. However, computing the entire Gram matrix has complexity quadratic in the number of samples. Therefore, a considerable amount of work has focused on extracting relevant information from the Gram matrix without accessing all of its elements. Most of these methods exploit the positive definiteness and rapidly decaying eigenstructure of the Gram matrix. Although information theoretic learning (ITL) is conceptually different from kernel-based learning, several ITL estimators can be written in terms of Gram matrices. ITL differs, however, in that a few of its estimators involve a special type of matrix that is neither positive definite nor symmetric. In this paper we discuss how the techniques applied in kernel-based learning can be used to reduce the computational complexity of ITL estimators involving both Gram matrices and these other matrices.
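As an illustration of the kind of technique the abstract refers to (not code from the paper itself), the sketch below shows a Nyström-style low-rank approximation of a Gram matrix. It exploits positive definiteness and a rapidly decaying eigenspectrum to approximate the full matrix from only n·m kernel evaluations rather than n². The Gaussian kernel, the uniform landmark sampling, and all function names here are illustrative assumptions.

```python
# Illustrative sketch, not the paper's method: Nystrom low-rank
# approximation K ~ C W^+ C^T of a positive-definite Gram matrix.
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel between rows of X and rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def nystrom_gram(X, m, sigma=1.0, seed=None):
    # Pick m landmark points, evaluate only the n x m block C and the
    # m x m block W, then reconstruct K ~ C pinv(W) C^T. Only n*m
    # kernel entries are ever computed instead of all n^2.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = gaussian_kernel(X, X[idx], sigma)   # n x m
    W = C[idx, :]                           # m x m
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
K = gaussian_kernel(X, X)                   # full n x n Gram matrix
K_approx = nystrom_gram(X, m=50, sigma=1.0, seed=0)
rel_err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
```

Because the residual K − C W⁺ Cᵀ is a Schur complement of a positive semidefinite matrix, the approximation error shrinks as the landmark count m grows; this is exactly the property that fails for the non-symmetric, non-positive-definite matrices the paper singles out.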
| Title of host publication | Neural Networks, 2009. IJCNN 2009. International Joint Conference on |
| Publisher | Institute of Electrical and Electronics Engineers (IEEE) |
| Number of pages | 5 |
| Publication status | Published - 14 Jun 2009 |