Abstract
An important question in information processing is the extent to which neural firing patterns remain consistent while processing representations. Transient changes in representational consistency can provide clues to the dynamics of neural processing. We present a generalized framework for measuring the consistency of a neuronal representation that does not require explicit knowledge of the parameters encoded by the ensemble. It requires only recorded neuronal ensemble activity and a training set of that activity that samples the behavioral parameters equally. This will be useful in structures where the behavioral parameters signalled by the neural activity are controversial or unknown.
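The abstract's idea of scoring consistency against a training set of ensemble activity, without reference to the encoded parameters, can be sketched with a kernel density estimate. The data, function names, and the mean log-density score below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical data: rows are time bins, columns are simultaneously
# recorded neurons' firing rates. "Consistent" activity resembles the
# training set; "shifted" activity deviates from it.
rng = np.random.default_rng(0)
train = rng.normal(loc=5.0, scale=1.0, size=(200, 3))
test_consistent = rng.normal(loc=5.0, scale=1.0, size=(50, 3))
test_shifted = rng.normal(loc=9.0, scale=1.0, size=(50, 3))

# Fit a kernel density estimate to the training ensemble states;
# gaussian_kde expects data with shape (dimensions, samples).
kde = gaussian_kde(train.T)

def consistency(activity, kde):
    """Mean log-density of ensemble states under the training KDE
    (an illustrative consistency score, not the paper's measure)."""
    return float(np.mean(kde.logpdf(activity.T)))

# Activity drawn from the same regime as training scores higher.
print(consistency(test_consistent, kde), consistency(test_shifted, kde))
```

Because the score depends only on the density of ensemble states, no labeling of the behavioral parameters is needed, matching the abstract's requirement of a training set alone.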
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 91-99 |
| Number of pages | 9 |
| Journal | Neurocomputing |
| Volume | 58-60 |
| DOIs | |
| State | Published - Jun 2004 |
Bibliographical note
Funding Information: We thank N.C. Schmitzer-Torbert for generally helpful dialogues. This work was supported by NIH Grant MH68029-01. JCJ was also partly supported by NSF-IGERT training Grant #9870633.
Keywords
- Distributed representation
- Kernel density estimation (KDE)
- Neural ensemble
- Reconstruction