“Source tracking using moving microphone arrays for robot audition”

June 19th, 2017

Published in: Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP). Available via IEEE Xplore.


C. Evers, Y. Dorfan, S. Gannot, and P. A. Naylor


Intuitive spoken dialogue is a prerequisite for human-robot interaction. In many practical situations, robots must be able to identify and focus on sources of interest in the presence of interfering speakers. Techniques such as spatial filtering and blind source separation are therefore often used, but they rely on accurate knowledge of the source location. In practice, sound emitted in enclosed environments is subject to reverberation and noise. Hence, sound source localization must be robust both to diffuse noise due to late reverberation and to spurious detections due to early reflections. For improved robustness against reverberation, this paper proposes a novel approach to sound source tracking that constructively exploits the spatial diversity of a microphone array installed in a moving robot. In previous work, we developed speaker localization approaches based on expectation-maximization (EM) and on Bayesian methods. In this paper, we propose to combine the EM and Bayesian approaches in a single framework for improved robustness against reverberation and noise.
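To give a flavour of the kind of recursive Bayesian tracking the abstract refers to, the sketch below implements a generic particle filter that tracks a source azimuth from noisy direction-of-arrival (DOA) detections. This is a minimal, self-contained illustration, not the authors' algorithm: the random-walk motion model, the Gaussian observation likelihood, and all parameter values (`n_particles`, `process_std`, `obs_std`) are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)


def wrap(a):
    """Wrap angles to the interval [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi


def particle_filter_doa(observations, n_particles=500,
                        process_std=0.05, obs_std=0.2):
    """Track a single source azimuth from noisy DOA observations.

    Illustrative only: uses a random-walk motion model and a Gaussian
    likelihood on the wrapped angular error. Returns one azimuth
    estimate (radians) per observation.
    """
    # Initialise particles uniformly over all directions.
    particles = rng.uniform(-np.pi, np.pi, n_particles)
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for z in observations:
        # Predict: propagate particles through the motion model.
        particles = wrap(particles + rng.normal(0.0, process_std, n_particles))
        # Update: reweight by the likelihood of the new detection.
        err = wrap(z - particles)
        weights *= np.exp(-0.5 * (err / obs_std) ** 2)
        weights /= weights.sum()
        # Estimate: circular (weighted) mean of the particle cloud.
        est = np.arctan2(np.sum(weights * np.sin(particles)),
                         np.sum(weights * np.cos(particles)))
        estimates.append(est)
        # Resample to avoid weight degeneracy.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        weights.fill(1.0 / n_particles)
    return np.array(estimates)
```

In a full system along the lines described above, the raw DOA detections fed to such a filter would themselves come from a localization front end (e.g. an EM-based estimator), and the filter's job is to smooth over the spurious detections caused by early reflections.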

