Forward Stagewise Additive Model for Collaborative Multiview Boosting

Biswajit Paria, Prabir Kumar Biswas, Avisek Lahiri
Multiview-assisted learning has gained significant attention in recent years in the supervised learning genre. The availability of high-performance computing devices enables learning algorithms to search simultaneously over multiple views or feature spaces to obtain optimal classification performance. This paper is a pioneering attempt at formulating a mathematical foundation for realizing a multiview-aided collaborative boosting architecture for multiclass classification. Most present algorithms apply multiview learning heuristically, without exploring the fundamental mathematical changes imposed on traditional boosting. Also, most algorithms are restricted to a two-class or two-view setting. Our proposed mathematical framework enables collaborative boosting across any finite-dimensional view spaces for multiclass learning. The boosting framework is based on a forward stagewise additive model, which minimizes a novel exponential loss function. We show that the exponential loss function essentially captures the difficulty of a training sample, rather than the traditional "1/0" loss. The new algorithm restricts a weak view from over-learning and thereby prevents overfitting. The model is inspired by our earlier attempt at collaborative boosting, which was devoid of mathematical justification. The proposed algorithm is shown to converge much nearer to the global minimum in the exponential loss space and thus supersedes our previous algorithm. This paper also presents analytical and numerical analyses of convergence and margin bounds for multiview boosting algorithms, and we show that our proposed ensemble learner manifests a lower error bound and a higher margin compared with our previous model. The proposed model is also compared with traditional boosting and recent multiview boosting algorithms. In the majority of instances, the new algorithm manifests a faster rate of convergence of training set error while simultaneously offering better generalization performance. The kappa-error diagram analysis reveals the robustness of the proposed boosting framework to labeling noise.
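To make the core idea concrete: a forward stagewise additive model builds an ensemble one weak learner at a time, at each stage fitting the learner and its weight to minimize the exponential loss of the partial ensemble; under exponential loss, sample weights grow on misclassified (i.e., difficult) examples. The sketch below is a minimal single-view, binary AdaBoost with decision stumps, which is the classical instance of this scheme — it is an illustration of the general technique, not the multiview multiclass algorithm proposed in the paper, and all function names here are our own.

```python
import numpy as np

def stump_predict(X, feature, threshold, polarity):
    # A decision stump: +1/-1 based on thresholding a single feature.
    return polarity * np.where(X[:, feature] <= threshold, 1.0, -1.0)

def fit_stump(X, y, w):
    # Exhaustively pick the stump minimizing the weighted 0/1 error.
    best, best_err = None, np.inf
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for p in (1.0, -1.0):
                err = np.sum(w * (stump_predict(X, f, t, p) != y))
                if err < best_err:
                    best_err, best = err, (f, t, p)
    return best, best_err

def adaboost(X, y, n_rounds=10):
    # Forward stagewise additive modeling under exponential loss:
    # each round adds one weighted stump to the ensemble.
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(n_rounds):
        (f, t, p), err = fit_stump(X, y, w)
        err = max(err, 1e-12)                 # guard against log(0)
        alpha = 0.5 * np.log((1 - err) / err) # stage weight minimizing exp-loss
        pred = stump_predict(X, f, t, p)
        # Exponential-loss reweighting: difficult (misclassified)
        # samples gain weight for the next stage.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, f, t, p))
    return ensemble

def predict(ensemble, X):
    # Sign of the additive score F(x) = sum_m alpha_m * h_m(x).
    score = sum(a * stump_predict(X, f, t, p) for a, f, t, p in ensemble)
    return np.sign(score)
```

The paper's contribution can be viewed as generalizing this stagewise scheme so that the reweighting in each round is informed by the performance of learners across multiple views, with the loss designed to keep a weak view from over-learning.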