
Background subtraction using the factored 3-way restricted Boltzmann machines.

Soonam Lee, Daekeun Kim

In this paper, we propose a method for reconstructing a 3D model from continuous sensory input. A robot can draw on an extremely large amount of data from the real world using various sensors, but these sensory inputs are usually noisy and high-dimensional, which makes it difficult and time consuming for the robot to build a 3D model directly from the raw data. A method is therefore needed that can extract useful information from such sensory inputs. To address this problem, our method uses the concept of the Object Semantic Hierarchy (OSH). Unlike previous work based on this hierarchical framework, we extract motion information with a Deep Belief Network rather than classical computer vision approaches. We trained on two large sets of 10,000 random-dot images, translated and rotated respectively, and successfully extracted several bases that explain translational and rotational motion. Using these translation and rotation bases, background subtraction becomes possible within the Object Semantic Hierarchy.
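The abstract does not include code, so the following is a minimal illustrative sketch (not the authors' implementation) of a factored 3-way ("gated") restricted Boltzmann machine trained with one-step contrastive divergence on image pairs, in the spirit of learning transformation bases from translated random-dot images. All dimensions, the learning rate, and the variable and function names are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

n_x, n_y, n_h, n_f = 256, 256, 64, 128   # input, output, hidden, factor sizes (assumed)
lr = 0.01                                 # learning rate (assumed)

# Factored weights: the 3-way tensor W[i, j, k] is approximated by
# sum_f Wx[i, f] * Wy[j, f] * Wh[k, f].
Wx = 0.01 * rng.standard_normal((n_x, n_f))
Wy = 0.01 * rng.standard_normal((n_y, n_f))
Wh = 0.01 * rng.standard_normal((n_h, n_f))
bh = np.zeros(n_h)
by = np.zeros(n_y)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def hidden_prob(x, y):
    # p(h = 1 | x, y): factor activities are products of the two projected images.
    f = (x @ Wx) * (y @ Wy)
    return sigmoid(f @ Wh.T + bh)

def output_prob(x, h):
    # p(y = 1 | x, h): reconstruct the transformed image given input and hiddens.
    f = (x @ Wx) * (h @ Wh)
    return sigmoid(f @ Wy.T + by)

def cd1_step(x, y):
    # One contrastive-divergence (CD-1) update on a single (x, y) image pair.
    global Wx, Wy, Wh, bh, by
    # Positive phase: hidden probabilities given the data pair.
    h0 = hidden_prob(x, y)
    # Negative phase: sample hiddens, reconstruct y, re-infer hiddens.
    h_samp = (rng.random(n_h) < h0).astype(float)
    y1 = output_prob(x, h_samp)
    h1 = hidden_prob(x, y1)

    # Gradients of the factored energy with respect to each weight matrix.
    def grads(xv, yv, hv):
        fx, fy, fh = xv @ Wx, yv @ Wy, hv @ Wh
        return (np.outer(xv, fy * fh),
                np.outer(yv, fx * fh),
                np.outer(hv, fx * fy))

    gxp, gyp, ghp = grads(x, y, h0)
    gxn, gyn, ghn = grads(x, y1, h1)

    Wx += lr * (gxp - gxn)
    Wy += lr * (gyp - gyn)
    Wh += lr * (ghp - ghn)
    bh += lr * (h0 - h1)
    by += lr * (y - y1)

# Toy usage: train on random-dot image pairs related by a circular shift,
# a stand-in for the translated and rotated dot images described in the abstract.
for _ in range(100):
    x = (rng.random(n_x) < 0.1).astype(float)
    y = np.roll(x, 3)        # "translated" version of x
    cd1_step(x, y)

After training, the columns of Wx and Wy (paired through Wh) act as the learned motion bases; which transformations they capture depends entirely on the image pairs used, per the assumptions above.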

Publisher URL: http://arxiv.org/abs/1802.01522

arXiv ID: arXiv:1802.01522v1
