Neural network vector quantizer design using sequential and parallel learning techniques

Frank H. Wu, Keshab K. Parhi, Kalyan Ganesan

Research output: Chapter in Book/Report/Conference proceeding, Conference contribution

1 Scopus citation

Abstract

Many techniques exist for quantizing large sets of input vectors into much smaller sets of output vectors. Various neural-network-based techniques for generating the output vectors via system training are studied. The variations are centered on a neural-net vector quantization (NNVQ) method that combines the well-known conventional LBG technique with the neural-net-based Kohonen technique. Sequential and parallel learning techniques for designing efficient NNVQs are given. The schemes presented require less computation time owing to a new modified gain formula, partial/zero neighbor updating, and parallel learning of the code vectors. Using Gaussian-Markov source and speech signal benchmarks, it is shown that these new approaches yield distortion as good as or better than that obtained with the LBG and Kohonen approaches.
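For orientation, the sketch below illustrates the kind of Kohonen-style competitive codebook update the abstract refers to. It is a minimal illustration only, assuming a plain winner-take-all rule with a linearly decaying gain; the paper's modified gain formula, partial/zero neighbor updating schedules, and parallel learning are not reproduced, and the function names and parameters (train_kohonen_vq, gain0, and so on) are hypothetical.

import numpy as np


def train_kohonen_vq(data, num_codes, epochs=10, gain0=0.5, seed=0):
    # Hypothetical sketch: winner-take-all Kohonen update with a linearly
    # decaying gain. The paper's modified gain formula and partial/zero
    # neighbor updating schedules are NOT reproduced here.
    rng = np.random.default_rng(seed)
    # Initialize code vectors from randomly chosen training samples.
    codebook = data[rng.choice(len(data), num_codes, replace=False)].copy()
    total_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            gain = gain0 * (1.0 - step / total_steps)  # decaying learning gain
            # Winner = nearest code vector in squared Euclidean distance.
            winner = np.argmin(np.sum((codebook - x) ** 2, axis=1))
            # "Zero-neighbor" update: only the winner moves toward the sample.
            codebook[winner] += gain * (x - codebook[winner])
            step += 1
    return codebook


def mean_distortion(data, codebook):
    # Average squared-error distortion of quantizing data with the codebook.
    d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()


if __name__ == "__main__":
    # Synthetic first-order Gauss-Markov source blocked into 4-dim vectors,
    # loosely mirroring one of the paper's benchmark sources.
    rng = np.random.default_rng(1)
    x = rng.standard_normal((2000, 4))
    for t in range(1, 4):
        x[:, t] = 0.9 * x[:, t - 1] + np.sqrt(1 - 0.9 ** 2) * x[:, t]
    cb = train_kohonen_vq(x, num_codes=16)
    print("mean squared distortion:", mean_distortion(x, cb))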

Original language: English (US)
Title of host publication: Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing
Editors: Anon
Publisher: Publ by IEEE
Pages: 637-640
Number of pages: 4
ISBN (Print): 078030033
State: Published - 1991
Event: Proceedings of the 1991 International Conference on Acoustics, Speech, and Signal Processing - ICASSP 91 - Toronto, Ont., Canada
Duration: May 14, 1991 - May 17, 1991

Publication series

Name: Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing
Volume: 1
ISSN (Print): 0736-7791

Other

Other: Proceedings of the 1991 International Conference on Acoustics, Speech, and Signal Processing - ICASSP 91
City: Toronto, Ont., Canada
Period: 5/14/91 - 5/17/91
