Automatic Analysis Of Audiostreams In The Concept Drift Environment
Stavros Ntalampiras, Politecnico di Milano

Computational Auditory Scene Analysis (CASA) is typically performed by statistical models trained offline on available data. Their performance relies heavily on the assumption that the process generating the data, along with the recording conditions, is stationary over time. Nowadays, there is high demand for methodologies and tools that address a series of problems tightly coupled with non-stationary conditions, such as changes in the recording conditions, the appearance of unknown audio classes, reverberation effects, etc. This paper unifies these obstacles under a concept drift framework and explores the passive adaptation approach. The overall aim is to learn online the statistical properties of the evolving data distribution and incorporate them into the recognition mechanism to boost its performance. The proposed CASA system encompasses: a) an approach for discriminating between abrupt and gradual concept drifts, b) an online adaptation module handling both kinds of drift, and c) a mechanism that automatically updates the dictionary of audio classes when needed. The proposed framework was evaluated on the auditory analysis of a home environment, based on a combination of professional sound event collections, and we report encouraging experimental results.
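The abstract does not detail the drift-discrimination mechanism itself; as a rough, hypothetical illustration of the idea, the sketch below monitors a stream of model scores (e.g. per-frame log-likelihoods of the current acoustic model) and labels the first detected change as abrupt or gradual. A CUSUM-style statistic accumulates deviations from a reference window: a large single-step deviation at detection time suggests an abrupt drift, while a slow accumulation of small deviations suggests a gradual one. All function names and thresholds here are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch (NOT the paper's method): distinguishing abrupt from
# gradual concept drift in a stream of model scores via a CUSUM-style test.
import statistics

def detect_drift(scores, ref_len=50, cusum_threshold=8.0, slack=0.5,
                 abrupt_z=3.0):
    """Return (index, 'abrupt' | 'gradual') for the first detected drift,
    or None if the stream looks stationary. Thresholds are hypothetical."""
    ref = scores[:ref_len]
    mu = statistics.mean(ref)
    sigma = statistics.pstdev(ref) or 1.0  # guard against zero variance
    cusum = 0.0
    for i in range(ref_len, len(scores)):
        # Standardized deviation of the current score from the reference.
        z = abs(scores[i] - mu) / sigma
        # CUSUM: accumulate deviations above the slack, never below zero.
        cusum = max(0.0, cusum + z - slack)
        if cusum > cusum_threshold:
            # A large deviation at the trigger point indicates a sudden jump;
            # otherwise the alarm was raised by many small deviations.
            kind = 'abrupt' if z > abrupt_z else 'gradual'
            return i, kind
    return None
```

A sudden jump in the score stream (e.g. a microphone change) trips the detector within a few frames with a large instantaneous deviation, whereas a slow trend (e.g. creeping reverberation) only trips it after many small deviations accumulate, which is what allows the two drift types to be told apart.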