Novel Robotic Sound Localization And Separation Using Non-Causal Filtering And Bayesian Fusion
Fakheredine Keyrouz, Notre Dame University

Binaural localization and separation of two concurrent sound sources using a humanoid head is a challenging task, especially when the localization must be performed in three dimensions using only the humanoid's two ears. From a biological perspective, the localization of sound sources and the perceptual separation of sound mixtures by the human hearing organ are two closely interrelated processes. From a signal processing perspective, however, localization and separation may be treated as two separate problems. This paper presents a novel method to localize and separate sound sources situated in highly reverberant environments by adapting a multiple-input multiple-output (MIMO) system with a Bayesian network. It is shown that, by exploiting the connections between blind system identification and blind source separation, the sound sources are not only efficiently separated but also accurately localized in the 3D space surrounding the humanoid head. The artificial head is equipped with two inner microphones and two outer microphones. The presented algorithm deploys a non-causal filter system that combines the neural-network principle of information propagation with the machine-learning technique of Bayesian networks. Simulation results, validated with experimental measurements, show that this method outperforms existing approaches in localization accuracy and in resolving the well-known front-back confusion problem.
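To make the fusion idea concrete, the following is a minimal sketch of naive Bayesian cue fusion over a front-back pair of candidate azimuths. All numerical values, the two-cue factorization, and the `fuse_cues` helper are illustrative assumptions, not the paper's actual MIMO non-causal filter system or its Bayesian network.

```python
import numpy as np

def fuse_cues(prior, likelihoods):
    """Posterior ∝ prior × Π likelihood_i, normalized over the azimuth grid.

    Assumes the cues are conditionally independent given the source
    direction — a simplifying assumption for this sketch only.
    """
    posterior = prior.astype(float).copy()
    for lik in likelihoods:
        posterior *= lik
    return posterior / posterior.sum()

# Two mirror-symmetric candidate azimuths (degrees): a front-back pair.
azimuths = np.array([30.0, 150.0])

# An interaural-time-difference cue alone cannot distinguish the pair:
itd_lik = np.array([0.5, 0.5])

# A second cue (e.g., spectral differences from the outer/inner
# microphone pairs) breaks the symmetry — hypothetical values:
spectral_lik = np.array([0.8, 0.2])

prior = np.array([0.5, 0.5])  # uniform prior over the two hypotheses
posterior = fuse_cues(prior, [itd_lik, spectral_lik])
print(posterior)  # → [0.8 0.2]: the front hypothesis now dominates
```

The point of the sketch is that a single binaural cue leaves the front-back pair ambiguous, while fusing a second, direction-dependent cue in a probabilistic framework resolves it, which is the role the Bayesian network plays in the proposed system.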