This correspondence presents novel sequential and parallel learning techniques for codebook design in vector quantizers using neural network approaches. These techniques are used in the training phase of vector quantizer design. Our learning techniques combine the split-and-cluster methodology of traditional vector quantizer design with neural learning, leading to better quantizers with lower distortion. Our sequential learning approach overcomes the codeword underutilization problem of the competitive learning network; as a result, the network requires only partial or zero updating, as opposed to the full neighbor updating needed in the self-organizing feature map. The parallel learning network, while satisfying the above characteristics, also allows the codewords to be learned in parallel, and can therefore be used for faster codebook design in a multiprocessor environment. It is shown that the sequential learning scheme can sometimes outperform the traditional LBG algorithm, while the parallel learning scheme performs very close to both the LBG and the sequential learning algorithms.
- Neural networks, parallel learning, sequential learning
- Signal compression, vector quantization
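To make the baseline concrete, the following is a minimal sketch of plain winner-take-all competitive learning for codebook design, the scheme whose codeword underutilization problem the correspondence addresses. Only the winning codeword is updated for each training vector (no neighbor updating as in the self-organizing feature map). Function names, the learning-rate schedule, and initialization are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def competitive_learning_vq(data, num_codewords, epochs=10, lr0=0.1, seed=0):
    """Winner-take-all competitive learning for VQ codebook design (sketch).

    For each training vector, only the nearest codeword (the winner) is
    moved toward it; all other codewords are left unchanged.
    """
    rng = np.random.default_rng(seed)
    # Initialize the codebook from randomly chosen training vectors.
    idx = rng.choice(len(data), num_codewords, replace=False)
    codebook = data[idx].astype(float)
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)  # linearly decaying learning rate
        for x in data[rng.permutation(len(data))]:
            # Winner: codeword nearest to x in squared Euclidean distance.
            w = np.argmin(np.sum((codebook - x) ** 2, axis=1))
            codebook[w] += lr * (x - codebook[w])  # move only the winner
    return codebook

def mean_distortion(data, codebook):
    """Average squared error between each vector and its nearest codeword."""
    d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()
```

Because a codeword that never wins is never updated, it can remain stranded far from the data; this is the underutilization problem that the sequential learning approach above is designed to overcome.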