Sunday, June 25, 2017


We focus on semantic segmentation: the task of labeling each pixel in an image with one of several pre-defined object classes or with background. Distinct from image-driven segmentation, class-based image segmentation aims not only to identify the object classes of interest but also to determine the shapes or boundaries of these objects. It thus involves resolving two of the most fundamental problems in vision research, recognition and segmentation, and plays an essential role in many high-level computer vision applications, such as image and scene understanding.
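As a minimal illustration (the class indices and the toy label map are made up), the per-pixel labeling that semantic segmentation produces can be represented as an integer map, from which both the recognition answer (which classes appear) and the segmentation answer (each object's mask) are read off:

```python
import numpy as np

# Semantic segmentation output as a per-pixel label map: each entry holds a
# class index (0 = background; class names here are illustrative only).
CLASSES = {0: "background", 1: "person", 2: "car"}

label_map = np.array([
    [0, 0, 2, 2],
    [0, 1, 1, 2],
    [0, 1, 1, 0],
], dtype=np.int64)

# Recognition: which object classes are present in the image?
present = sorted(int(c) for c in np.unique(label_map) if c != 0)

# Segmentation: the shape of each object is its binary mask.
person_mask = (label_map == 1)

print(present)                 # [1, 2]
print(int(person_mask.sum()))  # 4 pixels labeled "person"
```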


Class-based Image Segmentation

  • A concise annotation method for collecting training data for class-based image segmentation, in two steps:
    • Generate multiple tight segments by combining the multiple-segmentation method with the concept of a bounding box prior
    • Select the best segment by semi-supervised regression
  • Contributions
    • Present a novel algorithm that integrates the bounding box prior into the concept of multiple image segmentation and automatically generates multiple tight segments
    • Cast segment selection as a semi-supervised regression problem
    • Demonstrate that the approach provides an effective alternative to manually labeled contours
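The first step leans on the bounding box prior: a ground-truth box is the tightest rectangle around the object, so a plausible candidate segment should touch all four sides of its box. A hypothetical tightness check in that spirit (not the paper's actual generation algorithm, which produces the candidate segments themselves):

```python
import numpy as np

def is_tight(mask: np.ndarray) -> bool:
    """Return True if a candidate segment touches all four sides of its
    bounding box, i.e. the box is the minimal rectangle around the segment.
    (Hypothetical helper illustrating the bounding box prior.)"""
    rows = mask.any(axis=1)  # which rows contain segment pixels
    cols = mask.any(axis=0)  # which columns contain segment pixels
    return bool(rows[0] and rows[-1] and cols[0] and cols[-1])

# A segment spanning the box in both directions is tight...
tight = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]], dtype=bool)
# ...while one that leaves the top row of the box untouched is not.
loose = np.array([[0, 0, 0],
                  [1, 1, 0],
                  [0, 1, 0]], dtype=bool)
print(is_tight(tight), is_tight(loose))  # True False
```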


Core Techniques

  • Multiple Tight Segment Generation: An algorithm that automatically generates a set of tight segments for the bounding box of an object, such that at least one of these tight segments closely approximates the true object segment
  • Segment Selection: Given a few contours as well as a set of bounding boxes of an object class, infer the object segments of these bounding boxes by solving a semi-supervised regression problem
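The selection step can be sketched as graph-based semi-supervised regression: segments verified against the few available contours act as labeled nodes, and their scores propagate over a similarity graph to segments that come only with bounding boxes. The segment features, RBF affinity, and Laplacian-regularized objective below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def semi_supervised_scores(X, y, labeled, lam=1.0, sigma=1.0):
    """X: (n, d) segment feature vectors; y: (n,) target scores, trusted only
    where labeled is True. Returns smoothed scores for all n segments by
    solving  min_f (f - y)^T C (f - y) + lam * f^T L f,
    a Laplacian-regularized least-squares sketch (an assumed formulation)."""
    # RBF affinity between segments, then the graph Laplacian L = D - W.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W
    # C zeroes out the fitting loss on unlabeled segments; the Laplacian
    # term propagates labeled scores to their neighbors in feature space.
    C = np.diag(labeled.astype(float))
    return np.linalg.solve(C + lam * L, C @ y)

# Two clusters of segment features; one segment per cluster is labeled
# (e.g. verified against a contour). Scores spread to unlabeled neighbors.
X = np.array([[0.0], [0.1], [5.0], [5.1]])
y = np.array([1.0, 0.0, 0.0, 0.0])          # 1.0 = good segment
labeled = np.array([True, False, True, False])
scores = semi_supervised_scores(X, y, labeled)
# The unlabeled neighbor of the good segment inherits a high score,
# the unlabeled neighbor of the bad segment a low one.
print(scores[1] > 0.5, scores[3] < 0.5)  # True True
```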




Knowledge Leverage from Contours to Bounding Boxes: A Concise Approach to Annotation

Jie-Zhi Cheng, Feng-Ju Chang, Kuang-Jui Hsu, and Yen-Yu Lin

Asian Conference on Computer Vision (ACCV), Lecture Notes in Computer Science, November 2012, Poster presentation (Acceptance rate: 23.2%)