Domain Adaptation

Domain adaptation techniques address the problem of reducing the sensitivity of machine learning methods to the so-called domain shift, namely the difference between the source (training) and target (test) data distributions. This is of great importance in practical applications, where a trained model has to be deployed in an environment which can differ from the training one and where labeled data can be hard or impossible to gather.


Unsupervised Domain Adaptation

The problem setting of domain adaptation can be defined in two different ways:

  1. a few labeled target samples are available during training (semi-supervised domain adaptation), or
  2. no target labels are provided during training (unsupervised domain adaptation).

At PAVIS, our research focuses on the second setting. We have developed several methods that currently constitute the state of the art on different benchmarks (cross-dataset digit recognition, modality adaptation, etc.). Following the modern trend, the algorithms we are working on (and those we have already released) are based on neural networks.


Geometric methods

A common trend in domain adaptation leverages the alignment of source and target feature distributions. More specifically, some methods perform a geometric alignment of the second-order statistics of such distributions at a DNN's hidden layers. We explore a proper geodesic alignment of covariances on a Riemannian manifold. Such alignment induces entropy minimization on the target set, which proves to be an efficient criterion for the prickly problem of model validation in unsupervised domain adaptation [1].
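The idea above can be sketched numerically: align the covariance matrices of source and target features with a geodesic (log-Euclidean) distance on the SPD manifold, and measure the Shannon entropy of the target predictions. This is a minimal illustration of the two quantities involved, not the released implementation; the function names are our own, and the small ridge added to the covariances for numerical stability is an assumption.

```python
import numpy as np
from scipy.linalg import logm


def covariance(features, ridge=1e-3):
    """Covariance of a batch of d-dimensional features (n x d).

    A small ridge keeps the matrix symmetric positive definite,
    which the matrix logarithm below requires.
    """
    centered = features - features.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / (features.shape[0] - 1)
    return cov + ridge * np.eye(features.shape[1])


def log_euclidean_distance(cov_s, cov_t):
    """Geodesic (log-Euclidean) distance between two SPD covariance matrices."""
    diff = logm(cov_s) - logm(cov_t)
    return float(np.linalg.norm(diff, ord="fro"))


def target_entropy(probs, eps=1e-12):
    """Average Shannon entropy of target softmax predictions (n x classes)."""
    return float(-np.mean(np.sum(probs * np.log(probs + eps), axis=1)))
```

In training, a weighted sum of the geodesic alignment term and the target entropy would be minimized alongside the supervised source loss; low target entropy (confident predictions) is what makes entropy usable as a validation signal when no target labels exist.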

[Figure: distribution alignment]




Adversarial training

Generative adversarial networks and, more generally, adversarial training have proven to be very effective tools for tackling the domain adaptation problem. Indeed, one can use them to force the source and target distributions to be similar, and state-of-the-art methods are based on such techniques. We push our research toward a new adversarial training procedure based on a feature augmentation approach that boosts the performance of previous methods [2].
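The adversarial objective can be sketched with two loss terms: a domain discriminator trained to separate source features from target features, and a feature encoder trained to fool it so the two distributions become indistinguishable. The helpers below are a hypothetical NumPy illustration of these losses only (no networks), not the method of [2].

```python
import numpy as np


def discriminator_loss(d_src, d_tgt, eps=1e-12):
    """Binary cross-entropy for the domain discriminator.

    d_src / d_tgt are the discriminator's probabilities that source /
    target features come from the source domain; the discriminator is
    rewarded for outputting 1 on source and 0 on target.
    """
    return float(-np.mean(np.log(d_src + eps))
                 - np.mean(np.log(1.0 - d_tgt + eps)))


def encoder_loss(d_tgt, eps=1e-12):
    """Adversarial term for the feature encoder.

    The encoder is rewarded when the discriminator mistakes target
    features for source ones (d_tgt close to 1).
    """
    return float(-np.mean(np.log(d_tgt + eps)))
```

Training alternates between the two: minimize discriminator_loss w.r.t. the discriminator, then minimize encoder_loss w.r.t. the encoder, until source and target features are aligned in the shared feature space.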




[1] P. Morerio, J. Cavazza, V. Murino,
"Minimal-Entropy Correlation Alignment for Unsupervised Deep Domain Adaptation",
International Conference on Learning Representations (ICLR), 2018.

[2] R. Volpi, P. Morerio, S. Savarese, V. Murino,
"Adversarial Feature Augmentation for Unsupervised Domain Adaptation",
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.