Scalable Multi-Neighborhood Learning For Convolutional Networks
Elnaz Barshan, Paul Fieguth, Alexander Wong

Abstract:
In this paper we explore the role of scale in improved feature learning for convolutional networks. We propose multi-neighborhood convolutional networks, designed to learn image features at different levels of detail. Building on nonlinear scale-space models, the proposed multi-neighborhood model captures fine-scale image characteristics (i.e., appearance) using a small neighborhood, while coarse-scale image structures (i.e., shape) are detected through a larger neighborhood. In addition, we introduce a scalable learning method for the proposed multi-neighborhood architecture and show how an already-trained single-scale network can be used to extract image features at multiple levels of detail. Experimental results demonstrate the superior performance of the proposed multi-scale multi-neighborhood models over their single-scale counterparts without an increase in training cost.
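The abstract describes a two-branch design: one branch convolves the image with small-neighborhood filters to capture fine-scale appearance, the other with larger-neighborhood filters to capture coarse-scale shape. The sketch below is only an illustration of that idea in plain NumPy, not the paper's actual architecture; the function names, filter counts, and the choice of random filters with a ReLU nonlinearity are all assumptions made for the example.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def multi_neighborhood_features(image, small=3, large=7, n_filters=4, seed=0):
    """Illustrative two-branch multi-neighborhood layer: extract feature maps
    at a small and a large neighborhood size, then crop to a common extent."""
    rng = np.random.default_rng(seed)
    maps = []
    for k in (small, large):          # small branch: appearance; large branch: shape
        for _ in range(n_filters):
            kernel = rng.standard_normal((k, k))   # random filters, for illustration only
            fmap = np.maximum(conv2d_valid(image, kernel), 0.0)  # ReLU nonlinearity
            maps.append(fmap)
    # Crop all maps to the smallest spatial size so the branches can be stacked.
    min_h = min(m.shape[0] for m in maps)
    min_w = min(m.shape[1] for m in maps)
    return np.stack([m[:min_h, :min_w] for m in maps])

image = np.random.default_rng(1).standard_normal((32, 32))
feats = multi_neighborhood_features(image)
print(feats.shape)  # 8 maps: 4 from the 3x3 branch, 4 from the 7x7 branch
```

In a trained network the random filters would of course be replaced by learned ones; the point here is only the parallel small-/large-neighborhood structure and the stacking of both branches' feature maps.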