In:
Neural Computation, MIT Press, Vol. 15, No. 9 (2003-09-01), pp. 2147-2177
Abstract:
Temporal slowness is a learning principle that allows learning of invariant representations by extracting slowly varying features from quickly varying input signals. Slow feature analysis (SFA) is an efficient algorithm based on this principle and has been applied to the learning of translation, scale, and other invariances in a simple model of the visual system. Here, a theoretical analysis of the optimization problem solved by SFA is presented, which provides a deeper understanding of the simulation results obtained in previous studies.
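As background to the abstract, the core of linear SFA can be summarized in a few lines: whiten the input signal, then find the unit-variance projection whose time derivative has minimal variance. The sketch below is an illustrative NumPy implementation under those assumptions (function name `linear_sfa` and all parameters are hypothetical, not from the paper); it is not the authors' code.

```python
import numpy as np

def linear_sfa(x, n_features=1):
    """Minimal linear slow feature analysis sketch (hypothetical helper).

    x: array of shape (T, d), a multidimensional time series.
    Returns the n_features slowest output signals, each with zero mean
    and unit variance, ordered from slowest to fastest.
    """
    # Center and whiten (sphere) the input so its covariance is the identity.
    x = x - x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    keep = eigval > 1e-10                     # drop zero-variance directions
    whiten = eigvec[:, keep] / np.sqrt(eigval[keep])
    z = x @ whiten

    # Slowness objective: minimize the variance of the time derivative,
    # approximated here by first differences.
    dz = np.diff(z, axis=0)
    dcov = np.cov(dz, rowvar=False)
    dval, dvec = np.linalg.eigh(dcov)
    # Eigenvectors with the smallest eigenvalues give the slowest features.
    w = dvec[:, :n_features]
    return z @ w
```

For example, feeding it a mixture of a slow and a fast sinusoid should recover the slow one (up to sign) as the first output.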
Type of Medium:
Online Resource
ISSN:
0899-7667, 1530-888X
DOI:
10.1162/089976603322297331
Language:
English
Publisher:
MIT Press
Publication Date:
2003