In:
Neural Computation, MIT Press, Vol. 28, No. 1 (2016-01), pp. 45-70
Abstract:
We argue that when faced with big data sets, learning and inference algorithms should compute updates using only subsets of data items. We introduce algorithms that use sequential hypothesis tests to adaptively select such a subset of data points. The statistical properties of this subsampling process can be used to control the efficiency and accuracy of learning or inference. In the context of learning by optimization, we test for the probability that the update direction is no more than 90 degrees in the wrong direction. In the context of posterior inference using Markov chain Monte Carlo, we test for the probability that our decision to accept or reject a sample is wrong. We experimentally evaluate our algorithms on a number of models and data sets.
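The sequential test described for MCMC can be sketched as follows: keep enlarging a random subsample of per-datum log-likelihood differences until a simple test is confident about the sign of their mean relative to a threshold, and only then commit to accepting or rejecting the proposal. This is a minimal illustration of the idea, not the paper's exact procedure; the function name, the batch schedule, and the normal approximation (in place of a t-test) are our assumptions.

```python
import math
import numpy as np

def seq_accept_test(diffs, mu0, eps=0.05, batch=100, seed=0):
    """Decide whether mean(diffs) > mu0 using as few items as possible.

    diffs: per-data-point log-likelihood differences (full data set)
    mu0:   acceptance threshold derived from the MH uniform draw and prior
    eps:   tolerated probability that the decision is wrong
    Returns (accept, n_used). A sketch only; not the authors' exact test.
    """
    rng = np.random.default_rng(seed)
    N = len(diffs)
    perm = rng.permutation(N)  # draw data points without replacement
    n = 0
    while True:
        n = min(n + batch, N)
        s = diffs[perm[:n]]
        mean = s.mean()
        if n == N:
            return mean > mu0, n  # exact decision on the full data set
        # standard error with finite-population correction
        se = s.std(ddof=1) / math.sqrt(n)
        se *= math.sqrt(1.0 - (n - 1) / (N - 1))
        z = (mean - mu0) / se
        # normal-approximation probability that the sign of (mean - mu0)
        # would flip on the full data, i.e. that our decision is wrong
        delta = 0.5 * math.erfc(abs(z) / math.sqrt(2.0))
        if delta < eps:
            return mean > mu0, n
```

When the subsample mean is far from the threshold, the loop terminates after the first batch, so most MH steps touch only a small fraction of the data; ambiguous steps automatically fall back to larger subsets.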
Type of Medium:
Online Resource
ISSN:
0899-7667, 1530-888X
DOI:
10.1162/NECO_a_00796
Language:
English
Publisher:
MIT Press
Publication Date:
2016