Abstract:
Despite their strong generalization ability, most kernel-based models, including SVMs, suffer from the curse of kernelization: the model grows with the size of the training set, which makes kernel methods difficult to scale to large datasets. Their predictive performance also depends on hyperparameter tuning, which is computationally expensive. We first formulate the optimization problem in a kernel-based learning setting as a posterior inference problem, then develop a rich family of Recurrent Neural Network-based variational inference techniques, using Stein Variational Gradient Descent to refine the variational distribution toward the true posterior. The result is a robust, efficient variational learning method for multiclass kernel machines with a highly accurate posterior approximation. Because the formulation learns kernel parameters and hyperparameters jointly, the proposed method is robust to data uncertainty. On datasets of modest size, our method outperforms other baselines, including the well-known SVM implementation LIBSVM, without tuning any parameters.
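Since the approach centers on refining a particle approximation of the posterior with Stein Variational Gradient Descent, the following is a minimal sketch of one SVGD step. It assumes an RBF kernel with a median-heuristic bandwidth and a toy Gaussian target; the names (`rbf_kernel`, `svgd_step`, `grad_log_p`) and the toy usage are illustrative, not taken from the paper.

```python
# Minimal SVGD sketch: particles move along the kernelized Stein direction
#   phi(x_i) = (1/n) * sum_j [ k(x_j, x_i) * grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
import numpy as np

def rbf_kernel(X, h):
    """RBF kernel matrix K and its gradient w.r.t. the first argument.
    X: (n, d) particle positions; h: bandwidth."""
    diffs = X[:, None, :] - X[None, :, :]                 # (n, n, d)
    sq_dists = np.sum(diffs ** 2, axis=-1)                # (n, n)
    K = np.exp(-sq_dists / h)
    # grad_K[j, i] = d k(x_j, x_i) / d x_j = -(2 / h) * (x_j - x_i) * K[j, i]
    grad_K = -(2.0 / h) * diffs * K[:, :, None]           # (n, n, d)
    return K, grad_K

def svgd_step(X, grad_log_p, step_size=1e-1):
    """One SVGD update for a set of particles X of shape (n, d)."""
    n = X.shape[0]
    # Median heuristic for the bandwidth (a common default, not paper-specific).
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    h = np.median(sq_dists) / np.log(n + 1) + 1e-8
    K, grad_K = rbf_kernel(X, h)
    scores = grad_log_p(X)                                # (n, d): grad log p at each particle
    # Attractive term (weighted scores) plus repulsive term (kernel gradients).
    phi = (K.T @ scores + grad_K.sum(axis=0)) / n
    return X + step_size * phi

# Toy usage: push mis-initialized particles toward a standard Gaussian posterior.
rng = np.random.default_rng(0)
particles = rng.normal(loc=5.0, size=(50, 2))
for _ in range(200):
    particles = svgd_step(particles, grad_log_p=lambda X: -X)  # grad log N(0, I) = -x
print(particles.mean(axis=0))  # drifts toward [0, 0]
```

The repulsive kernel-gradient term is what distinguishes SVGD from plain gradient ascent on the log posterior: it keeps the particles spread out so they approximate the full posterior rather than collapsing onto its mode.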