Combining neural networks and the wavelet transform for image compression

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

This paper presents a new image compression scheme which uses the wavelet transform and neural networks. Image compression is performed in three steps. First, the image is decomposed at different scales using the wavelet transform to obtain an orthogonal wavelet representation of the image. Second, the wavelet coefficients are grouped into vectors, each of which is projected onto a subspace by a neural network; because fewer coefficients are needed to represent a vector in the subspace than to represent the original vector, this step achieves data compression. Finally, the coefficients which project the vectors of wavelet coefficients onto the subspace are quantized and entropy coded. The advantages of various quantization schemes are discussed. Using these techniques, we obtain 32-to-1 compression at a peak SNR of 29 dB for the 'lenna' image.
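The three-step pipeline in the abstract can be sketched as follows. This is a minimal illustration only: it uses a one-level Haar decomposition in place of the paper's multiscale wavelet transform, and a PCA projection as a linear stand-in for the paper's trained neural network. All function names and the quantizer step size are our own assumptions, not from the paper.

```python
import numpy as np

def haar_decompose(img):
    """One level of the 2-D orthonormal Haar wavelet transform
    (stand-in for the multiscale decomposition, step 1)."""
    a = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)   # row lowpass
    d = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)   # row highpass
    ll = (a[0::2, :] + a[1::2, :]) / np.sqrt(2)      # approximation
    lh = (a[0::2, :] - a[1::2, :]) / np.sqrt(2)      # horizontal detail
    hl = (d[0::2, :] + d[1::2, :]) / np.sqrt(2)      # vertical detail
    hh = (d[0::2, :] - d[1::2, :]) / np.sqrt(2)      # diagonal detail
    return ll, lh, hl, hh

def project_vectors(vectors, k):
    """Project coefficient vectors onto a k-dimensional subspace (step 2).
    PCA via the SVD is used here as a linear stand-in for the trained
    network; fewer coefficients (k) represent each vector."""
    mean = vectors.mean(axis=0)
    _, _, vt = np.linalg.svd(vectors - mean, full_matrices=False)
    basis = vt[:k]                       # k basis vectors spanning the subspace
    return (vectors - mean) @ basis.T, basis, mean

def quantize(coeffs, step):
    """Uniform scalar quantization of the subspace coefficients (step 3);
    the resulting integer indices would then be entropy coded."""
    return np.round(coeffs / step).astype(np.int32)

# Toy run: decompose an 8x8 ramp image, group the subband coefficients
# into vectors, project each 4-D vector onto a 2-D subspace, and quantize.
img = np.arange(64, dtype=float).reshape(8, 8)
ll, lh, hl, hh = haar_decompose(img)
vectors = np.stack([b.ravel() for b in (ll, lh, hl, hh)], axis=1)  # (16, 4)
coeffs, basis, mean = project_vectors(vectors, k=2)                # 4 -> 2 per vector
indices = quantize(coeffs, step=0.5)
```

Reconstruction would invert each step: dequantize the indices, map back to coefficient vectors with `coeffs @ basis + mean`, and apply the inverse wavelet transform.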

Original language: English (US)
Title of host publication: Plenary, Special, Audio, Underwater Acoustics, VLSI, Neural Networks
Publisher: Publ by IEEE
Pages: I-637 - I-640
ISBN (Print): 0780309464
State: Published - Jan 1 1993
Event: 1993 IEEE International Conference on Acoustics, Speech and Signal Processing - Minneapolis, MN, USA
Duration: Apr 27 1993 - Apr 30 1993

Publication series

Name: Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing
Volume: 1
ISSN (Print): 0736-7791


