J. R. Hopgood and C. Evers
In reverberant environments, a moving speaker yields a dynamically changing source-sensor geometry and hence a spatially varying acoustic impulse response (AIR) between the source and sensor. This leads to a time-varying convolutional relationship between the source signal and the observations, and thus to spectral colouration of the received signal; it is therefore desirable to reduce the effect of reverberation. In this paper, a model-based approach is proposed for single-channel blind dereverberation of speech from a moving speaker in an acoustic environment. The sound source is modelled by a block-based time-varying AR (TVAR) process, and the channel by a linear time-varying all-pole filter; in each case, the AR parameters are represented as a linear combination of known basis functions with unknown weightings. The speech model captures local nonstationarity while accounting for the global nonstationary characteristics inherent in long segments of speech. As an initial step towards single-channel blind dereverberation of real speech signals, this paper presents simulation results on synthetic data to demonstrate the proposed algorithm.
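The source model described above can be illustrated with a minimal sketch: a TVAR process whose AR coefficients are a linear combination of known basis functions with fixed weightings. The basis choice (constant plus linear ramp), the AR order, and the weight values below are illustrative assumptions, not the paper's actual configuration; the weights are chosen by hand so the time-varying filter remains stable over the block.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 2000   # samples in one block
P = 2      # AR order (assumed for illustration)
K = 2      # number of basis functions (assumed: constant + linear ramp)

# Known basis functions evaluated across the block: f_0[n] = 1, f_1[n] = ramp
t = np.linspace(-1.0, 1.0, N)
F = np.stack([np.ones(N), t])            # shape (K, N)

# Weightings W[i, k]: in the paper these are unknown and estimated blindly;
# here they are fixed so that a_i[n] = sum_k W[i, k] * F[k, n] stays stable.
W = np.array([[0.6, 0.2],                # weights for a_1[n]
              [-0.3, 0.1]])              # weights for a_2[n]
A = W @ F                                # time-varying AR coefficients, shape (P, N)

# Synthesize the TVAR source: x[n] = sum_i a_i[n] x[n-i] + e[n]
x = np.zeros(N)
e = rng.standard_normal(N)
for n in range(P, N):
    x[n] = A[:, n] @ x[n - P:n][::-1] + e[n]
```

The same basis-function parameterization is applied to the all-pole channel in the paper, so synthesizing an observation would amount to passing `x` through a second, analogously parameterized time-varying all-pole filter.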