“Articulatory based speech models for blind speech dereverberation using sequential Monte Carlo methods”

IEEE Xplore Access:

in Proc. European Signal Processing Conf. (EUSIPCO), August 2010

Authors:

C. Evers and J. R. Hopgood

Abstract:

Room reverberation reduces the intelligibility of audio signals. Enhancement is thus crucial for high-quality audio and scene analysis applications. This paper proposes to directly and optimally estimate the source signal and acoustic channel from the distorted observations. The remaining model parameters are sampled from a particle filter, facilitating real-time dereverberation. The approach was previously applied successfully to single- and multi-sensor blind dereverberation. Enhancement can be further improved by accurately modelling the speech production system. This paper therefore extends the blind dereverberation approach to incorporate a novel source model based on parallel formant synthesis, and compares it to an approach using a time-varying autoregressive (AR) model whose parameters vary according to a random walk. Experimental results show that dereverberation using the proposed model is improved for vowels, stop consonants, and fricatives.
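The baseline source model compared against in the abstract is a time-varying AR (TVAR) process whose coefficients evolve as a random walk. The following is a minimal sketch of that kind of source model, not the paper's implementation: the model order, step size, noise level, and stability heuristic are placeholder assumptions for illustration only.

```python
import numpy as np

def simulate_tvar_source(n_samples, order=4, coeff_step=1e-3, noise_std=1.0, seed=0):
    """Simulate a time-varying AR source whose coefficients follow a random walk.

    Illustrative only: order, step size, and noise level are placeholder values,
    not those used in the paper.
    """
    rng = np.random.default_rng(seed)
    a = rng.normal(scale=0.1, size=order)      # initial AR coefficients
    x = np.zeros(n_samples)
    for t in range(order, n_samples):
        # random-walk evolution of the AR coefficients
        a = a + rng.normal(scale=coeff_step, size=order)
        # crude stability safeguard: keep sum of |a_k| below 1 (a sufficient condition)
        if np.abs(a).sum() >= 1.0:
            a *= 0.98 / np.abs(a).sum()
        # AR recursion driven by white Gaussian excitation
        x[t] = a @ x[t - order:t][::-1] + rng.normal(scale=noise_std)
    return x

clean = simulate_tvar_source(16000)  # e.g. one second of source signal at 16 kHz
```

In a blind dereverberation setting along the lines described above, a state-space form of such a source model would be combined with an acoustic channel model, and the time-varying parameters tracked with a particle filter; the formant-based source model proposed in the paper replaces this generic TVAR prior with one grounded in speech production.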

Published in: Conferences, Dereverberation, Publications
