Experiments in Combining Boosting and Deep Stacked Networks
Manuel Montoya-Catalá, University Carlos III of Madrid
Ricardo Fernando Alvear-Sandoval, University Carlos III of Madrid
Aníbal Ramón Figueiras-Vidal, University Carlos III of Madrid

Abstract:
Both boosting and deep stacking sequentially train their units, taking into account the outputs of the previously trained learners. This parallelism suggests that advantages can be obtained by appropriately combining the two key techniques, i.e., emphasis and injection. In this paper, we propose a first form of such a combination: simultaneously applying a sufficiently general and flexible emphasis function and injecting the aggregated previous outputs into the learner being designed. We call this kind of classification mechanism Boosted and Aggregated Deep Stacked Networks (B-ADSNs). A series of experiments with selected benchmark databases reveals that, if carefully designed, B-ADSNs never perform worse than ADSNs (DSNs that work with aggregated output injection), and that in some cases their performance is better. We analyze and discuss the conditions required to obtain these favourable results and, finally, explain that there are other combination possibilities that merit study.
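To make the mechanism described above concrete, the following Python sketch illustrates one plausible reading of a B-ADSN training loop: each new unit is trained on the original features augmented with the aggregated outputs of the previous units (injection), under a boosting-style sample emphasis. This is an illustrative assumption, not the authors' implementation: the emphasis form in emphasis_weights, the use of weighted resampling in place of a weighted loss, the MLPClassifier units, the running-average aggregation, and the names train_badsn and predict_badsn are all hypothetical choices.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def emphasis_weights(y, agg, lam=0.5):
        # Illustrative emphasis: blend a uniform term with the squared
        # error of the current aggregated soft output, then normalize.
        err = (y - agg) ** 2
        w = lam + (1.0 - lam) * err
        return w / w.sum()

    def train_badsn(X, y, n_units=5, lam=0.5, seed=0):
        # y is assumed to hold 0/1 labels.
        rng = np.random.default_rng(seed)
        units = []
        # Neutral initial aggregate so the first unit's emphasis is uniform.
        agg = np.full(len(y), 0.5)
        for k in range(n_units):
            X_aug = np.column_stack([X, agg])   # inject aggregated outputs
            w = emphasis_weights(y, agg, lam)   # boosting-style emphasis
            idx = rng.choice(len(y), size=len(y), p=w)  # weighted resampling
            unit = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500,
                                 random_state=seed)
            unit.fit(X_aug[idx], y[idx])
            units.append(unit)
            out_k = unit.predict_proba(X_aug)[:, 1]
            agg = (k * agg + out_k) / (k + 1)   # running-average aggregation
        return units

    def predict_badsn(units, X):
        # Replay the same sequential injection used during training.
        agg = np.full(len(X), 0.5)
        for k, unit in enumerate(units):
            out_k = unit.predict_proba(np.column_stack([X, agg]))[:, 1]
            agg = (k * agg + out_k) / (k + 1)
        return (agg > 0.5).astype(int)

Setting lam = 1 in this sketch removes the emphasis and recovers an ADSN-style stack with pure aggregated output injection, which matches the baseline comparison described in the abstract.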