In:
Neural Computation, MIT Press, Vol. 31, No. 8 (2019-08), pp. 1718-1750
Abstract:
In this letter, we propose a variable selection method for general nonparametric kernel-based estimation. The proposed method consists of two stages: (1) construct a consistent estimator of the target function, and (2) approximate that estimator using a small number of variables via ℓ1-type penalized estimation. The method applies to a variety of kernel-based nonparametric estimators, including kernel ridge regression, kernel density estimation, and density-ratio estimation. We prove that the proposed method has the property of variable selection consistency when a power series kernel is used, where the power series kernels form a class that contains the polynomial and exponential kernels. This result extends the variable selection consistency of the nonnegative garrote (NNG), a special case of the adaptive lasso, to kernel-based estimators. Several experiments, including simulation studies and real data applications, demonstrate the effectiveness of the proposed method.
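The two-stage procedure in the abstract can be illustrated with a minimal sketch: fit a kernel ridge regressor with a polynomial (power series) kernel as the stage-1 estimator, then, in NNG style, rescale each input variable by a nonnegative weight and penalize the sum of the weights (an ℓ1 penalty, since the weights are nonnegative). The objective, penalty level, and thresholds below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.uniform(-1, 1, size=(n, d))
# Only the first two variables matter; the rest are noise dimensions.
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(n)

# Stage 1: consistent kernel estimator of the target function
# (polynomial kernel, a power series kernel truncated at degree 3).
f_hat = KernelRidge(kernel="poly", degree=3, coef0=1.0, alpha=1e-2).fit(X, y)
target = f_hat.predict(X)

# Stage 2: NNG-style approximation -- rescale each input by a nonnegative
# weight c_j and penalize sum_j c_j (assumed penalty level lam).
lam = 0.05

def objective(c):
    approx = f_hat.predict(X * c)  # stage-1 estimator on rescaled inputs
    return np.mean((target - approx) ** 2) + lam * np.sum(c)

res = minimize(objective, x0=np.ones(d),
               bounds=[(0.0, None)] * d, method="L-BFGS-B")
selected = np.flatnonzero(res.x > 1e-3)  # variables with nonzero scale
print("selected variables:", selected)
```

At the initial point c = 1 the fit term vanishes (the approximation equals the stage-1 estimator), so the penalty drives all weights down uniformly; only the weights of variables that the estimator genuinely depends on incur a fitting cost when shrunk, which is the NNG geometry the abstract alludes to.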
Type of Medium:
Online Resource
ISSN:
0899-7667, 1530-888X
DOI:
10.1162/neco_a_01212
Language:
English
Publisher:
MIT Press
Publication Date:
2019
ZDB-ID: 1025692-1, 1498403-9