About Christine Evers


“Direction of Arrival Estimation in the Spherical Harmonic Domain Using Subspace Pseudointensity Vectors”

September 26th, 2016

IEEE Xplore:

IEEE/ACM Transactions on Audio, Speech, and Language Processing

Authors:

Alastair H. Moore, Christine Evers, and Patrick A. Naylor

Abstract:

Direction of arrival (DOA) estimation is a fundamental problem in acoustic signal processing. It is used in a diverse range of applications, including spatial filtering, speech dereverberation, source separation and diarization. Intensity vector-based DOA estimation is attractive, especially for spherical sensor arrays, because it is computationally efficient. Two such methods are presented that operate on a spherical harmonic decomposition of a sound field observed using a spherical microphone array. The first uses pseudointensity vectors (PIVs) and works well in acoustic environments where only one sound source is active at any time. The second uses subspace pseudointensity vectors (SSPIVs) and is targeted at environments where multiple simultaneous sources and significant levels of reverberation make the problem more challenging. Analytical models are used to quantify the effects of an interfering source, diffuse noise, and sensor noise on PIVs and SSPIVs. The accuracy of DOA estimation using PIVs and SSPIVs is compared against the state of the art in simulations including realistic reverberation and noise for single and multiple, stationary and moving sources. Finally, robust performance of the proposed methods is demonstrated by using speech recordings in a real acoustic environment.
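For readers curious how a pseudointensity vector is formed in practice, here is a minimal Python sketch. It assumes STFT-domain zeroth- and first-order eigenbeam (spherical harmonic) signals are already available; the function names and the mapping from real-valued first-order harmonics to Cartesian velocity components are illustrative, and sign/ordering conventions vary between papers.

```python
import numpy as np

def pseudointensity_vectors(p00, p1m1, p10, p11):
    """Pseudointensity vectors from zeroth- and first-order
    spherical harmonic (eigenbeam) STFT coefficients.

    All inputs are complex arrays of shape (frames, bins).
    Returns an array of shape (frames, bins, 3) whose unit-normalised
    direction estimates the source DOA.
    """
    # Cartesian particle-velocity components from the real-valued
    # first-order spherical harmonics (one common convention;
    # sign and ordering conventions differ between formulations).
    v = np.stack([p11, p1m1, p10], axis=-1)
    # Active intensity: real part of conjugate pressure times velocity.
    return np.real(np.conj(p00)[..., None] * v)

def doa_estimate(piv):
    """Single-source DOA: average PIVs over time-frequency, normalise."""
    mean_iv = piv.reshape(-1, 3).mean(axis=0)
    return mean_iv / np.linalg.norm(mean_iv)
```

Averaging over all time-frequency bins, as above, is the single-source case described in the abstract; the subspace (SSPIV) variant instead operates on a decomposition that isolates individual source subspaces before the intensity computation.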

“Acoustic simultaneous localization and mapping (a-SLAM) of a moving microphone array and its surrounding speakers”

May 19th, 2016

IEEE Xplore:

in Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, Mar. 2016

Authors:

C. Evers, A. H. Moore, and P. A. Naylor

Abstract:

Acoustic scene mapping creates a representation of positions of audio sources such as talkers within the surrounding environment of a microphone array. By allowing the array to move, the acoustic scene can be explored in order to improve the map. Furthermore, the spatial diversity of the kinematic array allows for estimation of the source-sensor distance in scenarios where source directions of arrival are measured. As sound source localization is performed relative to the array position, mapping of acoustic sources requires knowledge of the absolute position of the microphone array in the room. If the array is moving, its absolute position is unknown in practice. Hence, Simultaneous Localization and Mapping (SLAM) is required in order to localize the microphone array position and map the surrounding sound sources. In realistic environments, microphone arrays receive a convolutive mixture of direct-path speech signals, noise and reflections due to reverberation. A key challenge of Acoustic SLAM (a-SLAM) is robustness against reverberant clutter measurements and missing source detections. This paper proposes a novel bearing-only a-SLAM approach using a Single-Cluster Probability Hypothesis Density filter. Results demonstrate convergence to accurate estimates of the array trajectory and source positions.

“Towards informative path planning for acoustic SLAM”

March 1st, 2016

Access:

in Proc. DAGA, Aachen, Germany, Mar. 2016

Authors:

C. Evers, A. H. Moore, and P. A. Naylor

Abstract:

Acoustic scene mapping is a challenging task as microphone arrays can often localize sound sources only in terms of their directions. Spatial diversity can be exploited constructively to infer source-sensor range when using microphone arrays installed on moving platforms, such as robots. As the absolute location of a moving robot is often unknown in practice, Acoustic Simultaneous Localization And Mapping (a-SLAM) is required in order to localize the moving robot’s positions and jointly map the sound sources. Using a novel a-SLAM approach, this paper investigates the impact of the choice of robot paths on source mapping accuracy. Simulation results demonstrate that a-SLAM performance can be improved by informatively planning robot paths.

“Direction of arrival estimation using pseudo-intensity vectors with direct-path dominance test”

December 28th, 2015

IEEE Xplore:

in Proc. European Signal Processing Conference (EUSIPCO), Nice, August 2015

Authors:

A. H. Moore, C. Evers, and P. A. Naylor

Abstract:

The accuracy of direction of arrival estimation tends to degrade under reverberant conditions due to the presence of reflected signal components which are correlated with the direct path. The recently proposed direct-path dominance test provides a means of identifying time-frequency regions in which a single signal path is dominant. By analysing only these regions it was shown that the accuracy of the FS-MUSIC algorithm could be significantly improved. However, for real-time implementation a less computationally demanding localisation algorithm would be preferable. In the present contribution we investigate the direct-path dominance test as a preprocessing step to pseudo-intensity vector-based localisation. A novel formulation of the pseudo-intensity vector is proposed which further exploits the direct path dominance test and leads to improved localisation performance.
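The direct-path dominance test described above can be sketched compactly. The following is a minimal Python illustration, assuming multichannel STFT coefficients as input; the neighbourhood sizes, threshold value, and function name are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def dpd_test(a, j_t=2, j_f=2, threshold=10.0):
    """Direct-path dominance test (sketch).

    a: complex STFT coefficients, shape (frames, bins, channels).
    For each time-frequency bin, a local spatial correlation matrix is
    averaged over a (j_t x j_f) neighbourhood; the bin passes when the
    ratio of the largest to the second-largest eigenvalue exceeds the
    threshold, indicating that a single (direct) path dominates.
    Returns a boolean mask of shape (frames - j_t + 1, bins - j_f + 1).
    """
    T, F, M = a.shape
    mask = np.zeros((T - j_t + 1, F - j_f + 1), dtype=bool)
    for t in range(T - j_t + 1):
        for f in range(F - j_f + 1):
            block = a[t:t + j_t, f:f + j_f].reshape(-1, M)
            R = block.conj().T @ block / block.shape[0]  # local correlation
            ev = np.linalg.eigvalsh(R)[::-1]             # descending order
            mask[t, f] = ev[0] / max(ev[1], 1e-12) > threshold
    return mask
```

Only bins passing the mask would then be fed to the pseudo-intensity vector computation, discarding reflection-contaminated regions before localisation.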

“Bearing-only acoustic tracking of moving speakers for robot audition”

September 10th, 2015

IEEE Xplore:

in Proc. IEEE Intl. Conf. Digital Signal Processing (DSP), Singapore, July 2015

Authors:

C. Evers, J. Sheaffer, A. H. Moore, B. Rafaely, and P. A. Naylor

Abstract:

This paper focuses on speaker tracking in robot audition for human-robot interaction. Using only acoustic signals, speaker tracking in enclosed spaces is subject to missing detections and spurious clutter measurements due to speech inactivity, reverberation and interference. Furthermore, many acoustic localization approaches estimate speaker direction, hence providing bearing-only measurements without range information. This paper presents a probability hypothesis density (PHD) tracker that augments the bearing-only speaker directions of arrival with a cloud of range hypotheses at speaker initiation and propagates the random variates through time. Furthermore, due to their formulation PHD filters explicitly model, and hence provide robustness against, clutter and missing detections. The approach is verified using experimental results.
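The initiation step described in the abstract, augmenting a bearing-only measurement with a cloud of range hypotheses, can be sketched as follows. This is a minimal Python illustration; the range limits, hypothesis count, and function name are illustrative assumptions, not values from the paper.

```python
import numpy as np

def spawn_range_hypotheses(azimuth, inclination, r_min=0.3, r_max=4.0, n=50):
    """Augment a bearing-only DOA measurement with range hypotheses.

    Given a measured bearing (azimuth and inclination in radians),
    return an (n, 3) array of candidate Cartesian source positions
    spaced along that bearing, relative to the array position.
    """
    ranges = np.linspace(r_min, r_max, n)
    direction = np.array([
        np.sin(inclination) * np.cos(azimuth),
        np.sin(inclination) * np.sin(azimuth),
        np.cos(inclination),
    ])
    return ranges[:, None] * direction[None, :]
```

In the tracker, each hypothesis would then be propagated through time; spatial diversity from the moving array lets subsequent bearings prune inconsistent ranges.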

“Multichannel equalisation for high-order spherical microphone arrays using beamformed channels”

September 10th, 2015

IEEE Xplore:

in Proc. IEEE Intl. Conf. Digital Signal Processing (DSP), Singapore, July 2015

Authors:

A. H. Moore, C. Evers, and P. A. Naylor

Abstract:

High-order spherical microphone arrays offer many practical benefits including relatively fine spatial resolution in all directions and rotation invariant processing using eigenbeams. Spatial filtering can reduce interference from noise and reverberation but in even moderately reverberant environments the beam pattern fails to suppress reverberation to a level adequate for typical applications. In this paper we investigate the feasibility of applying dereverberation by considering multiple beamformer outputs as channels to be dereverberated. In one realisation we process directly in the spherical harmonic domain where the beampatterns are mutually orthogonal. In a second realisation, which is not limited to spherical microphone arrays, beams are pointed in the direction of dominant reflections. Simulations demonstrate that in both cases reverberation is significantly reduced and, in the best case, clarity index is improved by 15 dB.

EUSIPCO 2015

September 6th, 2015

Back from a week in Nice, where I presented the Tutorial on Embodied Audition for Robots with Dr Heinrich Löllmann, FAU, and Prof Radu Horaud, INRIA, as well as our paper in [2]. If you would like to have a look at the slides again, or you missed the tutorial, you can find the complete set on the EARS website.

[1] H. Löllmann, C. Evers, and R. Horaud, “Embodied audition for robots,” Tutorial presented at European Signal Processing Conference (EUSIPCO), Nice, France, September 2015

[2] A. H. Moore, C. Evers, and P. A. Naylor, “Direction of arrival estimation using pseudo-intensity vectors with direct-path dominance test,” in Proc. European Signal Processing Conference (EUSIPCO), Nice, August 2015

Matlab EARS Map Objects – Now publicly available

September 1st, 2015

With the presentation of our EARS tutorial on Embodied Audition for Robots, I have now publicly released the first version of the Matlab EARS Map objects.

The EARS map objects are Matlab classes designed to store and visualise data for acoustic scene mapping. EARS map objects allow the storage of a) an individual speaker at one time step, using a mapFeature object, b) a collection of speakers at one time step, using a map object, and c) a trajectory of the evolution of a map object over time, using a mapFeature object. The objects are designed to contain data from both sound source localisation (SSL) and speaker tracking algorithms, providing a complete representation of the acoustic scene.

The MATLAB code can be found at: https://github.com/cevers/ears_map_objects

IEEE DSP 2015

July 25th, 2015

Greetings from IEEE DSP in Singapore where I presented the following two papers in the Special Session on “Recent Advances in Acoustic Signal Processing”. I am particularly excited about [1] as it is the result of an ongoing collaboration with Dr Jonathan Sheaffer and Prof Boaz Rafaely at the Acoustics Lab of Ben-Gurion University of the Negev and evolved from the EARS project.

[1] C. Evers, J. Sheaffer, A. H. Moore, B. Rafaely, and P. A. Naylor, “Bearing-only acoustic tracking of moving speakers for robot audition,” in Proc. IEEE Intl. Conf. Digital Signal Processing (DSP), Singapore, July 2015

[2] A. H. Moore, C. Evers, and P. A. Naylor, “Multichannel equalisation for high-order spherical microphone arrays using beamformed channels,” in Proc. IEEE Intl. Conf. Digital Signal Processing (DSP), Singapore, July 2015

Until the proceedings are available on IEEE Xplore, you can find our papers on the SAP Website.

