Saturday, August 19, 2017


Labeling data for image recognition or classification is often expensive. To reduce the labeling effort, transfer learning has proven to be a promising technique for object recognition with few training samples: it transfers useful knowledge from the source domain to improve learning of the target model. We focus in particular on transferring knowledge from multiple classes to multiple classes. Given two multi-class recognition tasks (one in the source domain and the other in the target domain), we leverage the extra source knowledge to learn a robust multi-class classifier in the target domain, rather than a set of binary classifiers.


Introduction

  • Multi-class object recognition with few labeled data
    • Goal: Learn a target classifier with low generalization error
    • Difficulty: When only a few labeled examples are available, over-fitting occurs; that is, the resulting classifier generalizes poorly.
  • Question: What prior knowledge can help learn a robust classifier without labeling new data, and how can it be transferred?
    • Source task: abundant existing labeled data
    • Target task: few labeled data
  • Motivation: leverage the extra source knowledge, together with the target knowledge in a common domain, to learn a more robust multi-class classifier
  • Conventional transfer learning algorithms lack a multi-class formulation


Core Ideas

  • Attribute transfer
  • Multi-class (source) to multi-class (target) knowledge transfer
  • What to transfer: a sequence of learnable, discriminant attributes
    • Commonly shared by the source and target domains
    • Convert the two multi-class classification tasks into related binary ones
  • How to transfer: a two-layer, multi-task variant of AdaBoost.OC (see the first sketch after this list)
    • Boosting algorithm with error-correcting output codes (ECOC)
      • Better generalization
    • Outer layer: attribute partition discovery
      • Discriminant: multi-class formulation
      • Learnable: no human effort required
      • Complementary: iterative error minimization
    • Inner layer: attribute classifier learning
      • Employs the classifier-sharing principle
      • Supports multiple kernel learning: combines various low-level features (see the second sketch after this list)

  • Our goal: Attributes should be learnable
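To make the two-layer procedure concrete, here is a minimal Python sketch of an AdaBoost.OC-style training loop: each round, the outer layer proposes a binary partition (an "attribute") over the classes, the inner layer fits a binary classifier to the relabeled data, and prediction decodes the resulting weighted ECOC codebook. This is an illustration under simplifying assumptions, not the paper's implementation: the partition search is random sampling rather than the learned, discriminant partitions of the outer layer, a decision stump stands in for the multiple-kernel attribute classifier, and no source-to-target sharing is shown. All function names are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_adaboost_oc(X, y, n_classes, n_rounds=50, seed=0):
    # Minimal AdaBoost.OC-style sketch (hypothetical, not the authors' code).
    rng = np.random.default_rng(seed)
    n = len(X)
    w = np.full(n, 1.0 / n)                    # per-example boosting weights
    colorings, classifiers, alphas = [], [], []

    for _ in range(n_rounds):
        # Outer layer: propose a partition of the classes into {-1, +1}.
        # The paper learns discriminant partitions; we sketch the idea by
        # sampling a few random colorings and keeping the best-fitting one.
        best = None
        for _ in range(10):
            mu = rng.choice([-1, 1], size=n_classes)
            if np.all(mu == mu[0]):            # skip degenerate colorings
                continue
            y_bin = mu[y]                      # relabel multi-class -> binary
            # Inner layer: attribute classifier on the relabeled data (the
            # paper uses multiple kernel learning; a stump stands in here).
            h = DecisionTreeClassifier(max_depth=1).fit(X, y_bin, sample_weight=w)
            err = float(np.sum(w * (h.predict(X) != y_bin)))
            if best is None or err < best[0]:
                best = (err, mu, h)
        err, mu, h = best
        err = min(max(err, 1e-10), 1 - 1e-10)  # guard the log below
        alpha = 0.5 * np.log((1 - err) / err)

        # Reweight: emphasize examples this attribute classifier got wrong.
        miss = h.predict(X) != mu[y]
        w *= np.exp(np.where(miss, alpha, -alpha))
        w /= w.sum()

        colorings.append(mu)
        classifiers.append(h)
        alphas.append(alpha)
    return colorings, classifiers, alphas

def predict_adaboost_oc(X, colorings, classifiers, alphas, n_classes):
    # ECOC decoding: score each class by the weighted agreement between the
    # attribute predictions and the class's code word, then take the argmax.
    scores = np.zeros((len(X), n_classes))
    for mu, h, a in zip(colorings, classifiers, alphas):
        scores += a * np.outer(h.predict(X), mu)
    return scores.argmax(axis=1)
```

Each round's coloring mu is exactly one attribute: it assigns every class to one side of a binary problem, and the sequence of colorings forms the code matrix that the decoder matches against at test time.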
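The inner layer combines several low-level features through multiple kernel learning. The second sketch below shows only the kernel-combination idea: a fixed convex combination of precomputed base kernels fed to a kernel SVM. The feature matrices, weights, and helper names are hypothetical, and the paper's MKL learns the combination weights jointly with the classifier instead of fixing them.

```python
import numpy as np
from sklearn.svm import SVC

def rbf_kernel_matrix(X, gamma):
    # Gram matrix of an RBF kernel over one low-level feature type.
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def combine_kernels(kernel_mats, betas):
    # Convex combination K = sum_m beta_m * K_m with beta_m >= 0, sum = 1.
    # (The paper's MKL learns the betas; this sketch evaluates fixed ones.)
    betas = np.asarray(betas, dtype=float)
    betas = betas / betas.sum()
    return sum(b * K for b, K in zip(betas, kernel_mats))

# Hypothetical per-image feature matrices, one per low-level cue.
rng = np.random.default_rng(0)
X_color, X_texture = rng.normal(size=(40, 8)), rng.normal(size=(40, 16))
y_attr = rng.choice([-1, 1], size=40)        # binary attribute labels

K = combine_kernels(
    [rbf_kernel_matrix(X_color, 0.1), rbf_kernel_matrix(X_texture, 0.05)],
    betas=[0.6, 0.4],
)
attr_clf = SVC(kernel="precomputed").fit(K, y_attr)  # one attribute classifier
```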


Publications


Cross-Database Transfer Learning via Learnable and Discriminant Error-correcting Output Codes

Feng-Ju Chang, Yen-Yu Lin, and Ming-Fang Weng


Asian Conference on Computer Vision (ACCV), Lecture Notes in Computer Science, November 2012 (oral presentation; acceptance rate 3.6%, 31/869)


Paper