Abstract:
The discriminative mixture variational autoencoder (DMVAE) is a deep probabilistic model for feature extraction in semisupervised learning. The DMVAE comprises encoding, decoding, and classification modules. In the encoding module, the encoder projects each observation into the latent space; the latent representation is then fed to the decoding module, which models the generative process from the latent variable back to the data. The DMVAE decoding module partitions the observed dataset into clusters using multiple decoders, whose number is determined automatically by a Dirichlet process (DP), and learns a probability distribution for each cluster. The DMVAE can therefore describe observations more accurately than the standard variational autoencoder (VAE), which describes all data with a single probability distribution, improving the representational power of the extracted features, especially for data with complex distributions. To obtain a discriminative latent space, the class labels of the labeled data constrain feature learning through a softmax classifier, while an entropy term encourages low-entropy (confident) predicted labels for features extracted from unlabeled data. Finally, joint optimization of the marginal likelihood, the label constraint, and the entropy constraint improves DMVAE performance by increasing classification confidence on unlabeled data while classifying labeled data accurately. The proposed DMVAE-based semisupervised classification outperforms competing methods on benchmark datasets and on a measured radar echo dataset.
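As a minimal illustration of the entropy constraint described above (not the authors' implementation), the sketch below computes the softmax-prediction entropy that the DMVAE would minimize for unlabeled data: a confident prediction yields low entropy, while a near-uniform prediction yields entropy close to the maximum ln(K) for K classes. All function names here are hypothetical.

```python
import math

def softmax(logits):
    """Convert raw classifier scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs):
    """Shannon entropy of a discrete distribution (natural log)."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

# A confident softmax prediction (one dominant class) has low entropy,
# so the entropy penalty pushes unlabeled features toward such regions.
confident = softmax([8.0, 0.0, 0.0])

# A near-uniform prediction has entropy close to ln(3) for 3 classes.
uncertain = softmax([1.0, 1.0, 1.0])

print(entropy(confident) < entropy(uncertain))  # True
```

Minimizing this entropy term over unlabeled samples, jointly with the labeled-data classification loss and the marginal likelihood, is what drives the discriminative structure of the latent space.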