Python Machine Learning Projects

Abstract:

Scene parsing, robot motion planning, and autonomous driving all require accurate scene classification. Deep recognition models have performed well on this task for a decade, but these architectures cannot encode human visual perception, including gaze movements and the underlying cognitive processes. This project presents a biologically inspired deep model for scene classification in which human gaze behaviors are robustly discovered and represented within a unified deep active learning (UDAL) framework. An objectness measure decomposes each scene into semantically aware object patches, characterizing objects of different sizes. A local–global feature fusion scheme then automatically weights multimodal features to produce a low-level representation of each region. UDAL hierarchically represents human gaze behavior by recognizing semantically important regions across varied scenes, mimicking human visual perception. Importantly, UDAL combines semantically salient region detection and deep gaze shifting path (GSP) representation learning into a principled framework that requires only partial semantic tags, while a sparsity penalty suppresses contaminated or redundant low-level regional features. Finally, an image kernel is built from the learned deep GSP features of all scene images and fed into a kernel SVM to classify the scenes. The approach was evaluated competitively on six well-known scene datasets, including remote sensing images.
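The final classification step described above can be sketched in Python. This is a minimal illustration only: the random `gsp_features` array stands in for the deep GSP features the trained model would actually produce, the linear Gram matrix is one simple choice of image kernel, and the label array is synthetic. It shows how a precomputed kernel matrix is fed to a kernel SVM using scikit-learn.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical stand-in for learned deep GSP features (one row per scene image).
# In the actual pipeline these would come from the trained UDAL model.
rng = np.random.default_rng(0)
n_scenes, feat_dim = 60, 128
gsp_features = rng.normal(size=(n_scenes, feat_dim))
labels = rng.integers(0, 3, size=n_scenes)  # e.g. 3 scene categories

# Build an image kernel from the GSP features (linear kernel shown as one
# simple choice) and feed it to a kernel SVM as a precomputed Gram matrix.
kernel_matrix = gsp_features @ gsp_features.T
clf = SVC(kernel="precomputed")
clf.fit(kernel_matrix, labels)

# Classify scenes: the test kernel holds similarities to the training scenes.
preds = clf.predict(kernel_matrix)
print(preds.shape)  # one predicted scene label per image
```

In practice the kernel choice (linear, RBF, or a domain-specific image kernel) and the feature extractor would follow the paper's design; only the precomputed-kernel SVM mechanics are shown here.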

