Acoustic Scene Analysis Based On Latent Acoustic Topic And Event Allocation
Keisuke Imoto, Yasunori Ohishi, Hisashi Uematsu, Hitoshi Ohmuro

Abstract:
We propose a model, called the latent acoustic topic and event allocation (LATEA) model, for analyzing acoustic scenes from long-term (more than several seconds) acoustic signals. The model is a probabilistic generative model of an acoustic feature sequence associated with acoustic scenes (e.g., "cooking") and acoustic events (e.g., "cutting with a knife," "heating a skillet," or "running water").

By representing acoustic events and scenes as latent variables, the proposed model can analyze a wide variety of sounds and capture abstract acoustic scenes. By modeling the features of each acoustic event as a mixture of Gaussian components, it can also describe the acoustic similarity and variance between events.
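To make the layered structure concrete, the following is a minimal sketch of a LATEA-style generative process as we read it from this description: a scene draws a mixture over latent topics, each frame draws a topic, then an event, then a Gaussian component that emits the acoustic feature. All dimensions, priors, and variable names here are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

# Sketch of a LATEA-style generative process (illustrative assumptions only;
# the paper's actual priors and dimensions may differ).
rng = np.random.default_rng(0)

n_topics = 3    # latent acoustic topics (scene-level, e.g. "cooking")
n_events = 5    # latent acoustic events (e.g. "cutting with a knife")
n_comps = 2     # Gaussian components per event (within-event variance)
dim = 4         # acoustic feature dimension (e.g. MFCC-like features)

# Model parameters (drawn at random here; in practice learned from data).
alpha = np.full(n_topics, 0.5)                            # Dirichlet prior over topics
topic_event = rng.dirichlet(np.ones(n_events), n_topics)  # P(event | topic)
event_comp = rng.dirichlet(np.ones(n_comps), n_events)    # P(component | event)
means = rng.normal(0.0, 3.0, (n_events, n_comps, dim))    # component means
stds = np.ones((n_events, n_comps, dim))                  # diagonal std devs

def generate_recording(n_frames):
    """Generate one recording: per-frame features and latent event labels."""
    theta = rng.dirichlet(alpha)                    # scene-specific topic mixture
    feats = np.empty((n_frames, dim))
    events = np.empty(n_frames, dtype=int)
    for t in range(n_frames):
        z = rng.choice(n_topics, p=theta)           # latent acoustic topic
        e = rng.choice(n_events, p=topic_event[z])  # latent acoustic event
        k = rng.choice(n_comps, p=event_comp[e])    # Gaussian component
        feats[t] = rng.normal(means[e, k], stds[e, k])
        events[t] = e
    return feats, events

feats, events = generate_recording(100)
print(feats.shape)  # (100, 4)
```

Representing events as mixtures of shared Gaussian components (rather than single Gaussians) is what lets the model express how acoustically similar two events are and how much each event varies internally.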

Experiments with real-life sounds showed that the proposed model achieved lower perplexity than conventional models and improved the stability of acoustic scene estimation.
The experimental results also suggested that the proposed model describes the acoustic similarity and variance between acoustic events better than conventional models.