Title
DIVA: Domain invariant variational autoencoders
Author
Ilse, M.
Tomczak, J.M.
Louizos, C.
Welling, M.
Publication year
2019
Abstract
We consider the problem of domain generalization, namely, how to learn representations given data from a set of domains that generalize to data from a previously unseen domain. We propose the domain invariant VAE (DIVA), a generative model that tackles this problem by learning three independent latent subspaces, one for the class, one for the domain, and one for the object itself. In addition, we highlight that due to the generative nature of our model we can also incorporate unlabeled data from known or previously unseen domains. This property is highly desirable in fields like medical imaging where labeled data is scarce. We experimentally evaluate our model on the rotated MNIST benchmark, where we show that (i) the learned subspaces are indeed complementary to each other, (ii) we improve upon recent works on this task, and (iii) incorporating unlabeled data can boost the performance even further. © Deep Generative Models for Highly Structured Data, DGS@ICLR 2019 Workshop. All rights reserved.
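The abstract describes a latent representation partitioned into three independent subspaces (class, domain, object). A minimal sketch of that partition, not the authors' code, with hypothetical subspace sizes chosen purely for illustration:

```python
import numpy as np

# Illustrative sketch (not the DIVA implementation): the latent vector for
# an input x is partitioned into three independent subspaces:
#   z_d - domain-specific factors (e.g. rotation angle in rotated MNIST),
#   z_y - class-specific factors (e.g. digit identity),
#   z_x - residual, object-specific variation.
# The dimensions below are assumed values for illustration only.
DIM_D, DIM_Y, DIM_X = 64, 64, 64

def split_latent(z):
    """Split latent vectors (last axis) into the three subspaces."""
    assert z.shape[-1] == DIM_D + DIM_Y + DIM_X
    z_d = z[..., :DIM_D]
    z_y = z[..., DIM_D:DIM_D + DIM_Y]
    z_x = z[..., DIM_D + DIM_Y:]
    return z_d, z_y, z_x

# Example: a batch of 8 latent samples drawn from a standard normal prior.
z = np.random.randn(8, DIM_D + DIM_Y + DIM_X)
z_d, z_y, z_x = split_latent(z)
print(z_d.shape, z_y.shape, z_x.shape)  # (8, 64) (8, 64) (8, 64)
```

In the paper each subspace has its own encoder and prior; the sketch only shows the partitioning idea itself.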
Subject
Benchmarking
Medical imaging
Autoencoders
Generative model
In-field
Labeled data
Unlabeled data
Learning systems
To reference this document use:
http://resolver.tudelft.nl/uuid:2e962d5b-9c14-4209-b717-eec1a54f3853
TNO identifier
875979
Publisher
International Conference on Learning Representations, ICLR
Source
Deep Generative Models for Highly Structured Data, DGS@ICLR 2019 Workshop, 6-May-19
Document type
conference paper