TY - JOUR
T1 - Concurrent Cellular VLSI Adaptive Filter Architectures
AU - Parhi, Keshab K.
AU - Messerschmitt, David G.
PY - 1987/10
Y1 - 1987/10
N2 - Previous approaches to high-sampling-rate adaptive filter implementations have been based on word-level pipelined word-parallel (or “block”) realizations. In this paper, we show that adaptive filters can be implemented in an area-efficient manner by first using pipelining to the maximum possible extent, and then using block processing in combination with pipelining if further increase in sampling rate is needed. We show that, with the use of a decomposition technique, high-speed realizations can be achieved using pipelining with a logarithmic increase in hardware (the block realizations require a linear increase in hardware). We derive pipelined word-parallel realizations of high-sampling-rate adaptive lattice filters using the techniques of look-ahead computation, decomposed state update implementation, and incremental output computation. These three techniques combined make it possible to achieve asymptotically optimal complexity realizations (i.e., the same complexity asymptotically as nonrecursive systems) of high-speed adaptive lattice filters (in both bit-serial and bit-parallel methodologies) and provide a “system solution” to high-speed adaptive filtering. The adaptive lattice filter structures are ideal for high-sampling-rate implementations, since the error residuals of a particular stage are adapted order-recursively based on those of the previous stage, and the coefficient update recursion inside each stage is linear in nature. An example of a normalized stochastic gradient adaptive lattice filter is presented, and its complexity, latency, and implementation methodology tradeoffs are studied.
UR - http://www.scopus.com/inward/record.url?scp=0344124984&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0344124984&partnerID=8YFLogxK
U2 - 10.1109/TCS.1987.1086048
DO - 10.1109/TCS.1987.1086048
M3 - Article
AN - SCOPUS:0344124984
SN - 0098-4094
VL - 34
SP - 1141
EP - 1151
JO - IEEE Transactions on Circuits and Systems
JF - IEEE Transactions on Circuits and Systems
IS - 10
ER -