Deep CockTail Networks: A Universal Framework for Visual Multi-source Domain Adaptation
Ziliang Chen, Pengxu Wei, Jingyu Zhuang, Guanbin Li, and Liang Lin
IJCV 2021

Abstract


Transferable deep representations for visual domain adaptation (DA) provide a route to learning from labeled source images to recognize target images without the aid of target-domain supervision. This line of research has attracted increasing interest owing to its industrial potential for non-laborious annotation and remarkable generalization. However, DA presumes that source images are identically sampled from a single source, whereas Multi-Source DA (MSDA) is ubiquitous in the real world. In MSDA, domain shifts exist not only between the source and target domains but also among the sources; in particular, the multiple source domains and the target domain may disagree on their semantics (e.g., category shifts). These issues challenge existing MSDA solutions. In this paper, we propose the Deep CockTail Network (DCTN), a universal and flexibly deployable framework to address these problems. DCTN uses a multi-way adversarial learning pipeline to minimize the domain discrepancy between the target and each of the multiple sources in order to learn domain-invariant features. The derived source-specific perplexity scores measure how closely each target feature resembles a feature from each of the source domains. The multi-source category classifiers are then integrated with the perplexity scores to categorize target images. We further derive a theoretical analysis of DCTN, including an interpretation of why DCTN can succeed without precisely crafting source-specific hyper-parameters, and target expected-loss upper bounds in terms of domain and category shifts. In our experiments, DCTN is evaluated on four benchmarks, covering the vanilla setting and three challenging category-shift transfer problems in MSDA, i.e., the source-shift, target-shift, and source-target-shift scenarios. The results show that DCTN significantly boosts classification accuracy in MSDA and is remarkably robust against negative transfer across different MSDA scenarios.

 

 

Framework
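
The abstract above outlines DCTN's target classification rule: source-specific perplexity scores, produced by a multi-way domain discriminator, weight the predictions of the per-source category classifiers. Below is a minimal PyTorch sketch of that weighted integration, assuming a shared feature extractor, one category classifier per source, and a softmax normalization of the discriminator outputs into perplexity scores; all module names and shapes are illustrative assumptions, not the authors' implementation, and the adversarial training that produces these modules is omitted.

# A minimal sketch (not the authors' released code) of perplexity-weighted
# target classification in DCTN; module names and sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_SOURCES, NUM_CLASSES, FEAT_DIM = 3, 10, 256

feature_extractor = nn.Linear(784, FEAT_DIM)              # stand-in for a deep CNN backbone
source_classifiers = nn.ModuleList(
    [nn.Linear(FEAT_DIM, NUM_CLASSES) for _ in range(NUM_SOURCES)]
)
domain_discriminator = nn.Linear(FEAT_DIM, NUM_SOURCES)   # one output per source domain

def classify_target(x):
    """Integrate the multi-source classifiers via perplexity scores."""
    f = feature_extractor(x)                               # (B, FEAT_DIM)
    # Perplexity scores: how source-like each target feature appears
    # (assumed here to be a softmax over the discriminator outputs).
    perplexity = F.softmax(domain_discriminator(f), dim=1) # (B, NUM_SOURCES)
    # Category predictions from each source-specific classifier.
    per_source = torch.stack(
        [F.softmax(clf(f), dim=1) for clf in source_classifiers], dim=1
    )                                                      # (B, NUM_SOURCES, NUM_CLASSES)
    # Weight each source's prediction by its perplexity score and combine.
    return (perplexity.unsqueeze(-1) * per_source).sum(dim=1)  # (B, NUM_CLASSES)

target_batch = torch.randn(4, 784)
print(classify_target(target_batch).shape)                 # torch.Size([4, 10])

Weighting each source classifier by how source-like the target feature appears lets unrelated sources contribute little to the final prediction, which is consistent with the resistance to negative transfer reported above.
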
Experiment

Conclusion


In this paper, we have explored unsupervised DA involving multiple sources, challenged by both domain shift and category shift. Besides the vanilla MSDA transfer scenario, we further investigate three other novel and realistic MSDA scenarios, in which the category sets across the multiple sources and the target are assumed to be inconsistent. To overcome these transfer challenges, we propose the Deep CockTail Network (DCTN), an adversarial DA framework that learns transferable and discriminative features from multiple sources for a target domain. It consists of an alternating learning process guided by our target classification principle. DCTN can be flexibly deployed in ordinary MSDA and category-shift scenarios and, more importantly, suits the open-set scenario with a mild reconfiguration. Delving into the motivation behind DCTN, we further show that DCTN connects with a previous MSDA theory and enjoys an expected-loss upper bound under an adversarial DA assumption, rather than requiring a strong target-mixture precondition. Finally, DCTN is evaluated across three benchmarks with numerous transfer combinations under three scenarios. It achieves state-of-the-art results on most of our evaluation criteria and proves remarkably robust against negative transfer effects.