Abstract:
Unsupervised domain adaptation (UDA) learns a classifier for an unlabeled target domain from a labeled source domain drawn from a related but different distribution. Most methods learn domain-invariant features by aligning the two domains; however, forcing this alignment can also suppress useful information and weaken the learned features. The deep ladder-suppression network (DLSN) is a novel, elegant module that suppresses domain-specific variations so the network can better learn the content shared across domains. DLSN is an autoencoder with lateral connections from the encoder to the decoder. These connections directly feed the decoder the domain-specific details needed to reconstruct the unlabeled target data, relieving the shared encoder of the burden of modeling them. The shared encoder is thus free to ignore domain-specific variations and focus on learning cross-domain shared content. The proposed DLSN can be integrated into existing UDA frameworks as a standard module to improve performance. Without bells and whistles, extensive experimental results on four benchmark domain adaptation datasets (Digits, Office-31, Office-Home, and VisDA-C) show that the proposed DLSN consistently and significantly improves the performance of various popular UDA frameworks.
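To illustrate the architectural idea, the following is a minimal NumPy sketch of a ladder-style autoencoder forward pass. It is not the authors' implementation; the layer sizes, weight initializations, and names (`W1`, `L1`, `forward`, etc.) are hypothetical. The key point is the lateral term `h1 @ L1`, which hands the intermediate encoder activation directly to the decoder, so the top-level code need not carry the detail required for reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical layer sizes, for illustration only.
D_IN, D_HID, D_CODE = 8, 6, 4

# Shared encoder weights (two layers).
W1 = rng.standard_normal((D_IN, D_HID)) * 0.1
W2 = rng.standard_normal((D_HID, D_CODE)) * 0.1

# Decoder weights: a top-down path consumes the code, and a
# lateral (skip) path consumes the intermediate encoder
# activation, as in a ladder-style autoencoder.
V2 = rng.standard_normal((D_CODE, D_HID)) * 0.1
V1 = rng.standard_normal((D_HID, D_IN)) * 0.1
L1 = rng.standard_normal((D_HID, D_HID)) * 0.1  # lateral connection

def forward(x):
    # Shared encoder: x -> h1 -> code.
    h1 = relu(x @ W1)
    code = relu(h1 @ W2)
    # Decoder: the lateral term h1 @ L1 supplies the
    # reconstruction detail, relieving `code` of that burden.
    d1 = relu(code @ V2 + h1 @ L1)
    x_rec = d1 @ V1
    return code, x_rec

x = rng.standard_normal((5, D_IN))
code, x_rec = forward(x)
print(code.shape, x_rec.shape)  # (5, 4) (5, 8)
```

In training, a reconstruction loss on `x_rec` would drive domain-specific detail into the lateral path, while the classifier and any domain-alignment loss operate only on `code`.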