Abstract:
Semantic segmentation has advanced rapidly with convolutional neural networks (CNNs). Despite their success, deep learning methods require large real-world datasets with pixel-level annotations. Because pixel-level semantic labeling is laborious, many researchers turn to synthetic data, where annotations come for free. However, owing to the domain gap, segmentation models trained on synthetic images perform poorly on real-world datasets. Unsupervised domain adaptation (UDA) for semantic segmentation is therefore studied to reduce this domain discrepancy. Existing methods either align features or outputs across the source and target domains or rely on complex image processing and post-processing pipelines. This paper introduces the Confidence-and-Refinement Adaptation Model (CRAM), a multi-level UDA model built from a CEA module and an SFA module. The CEA module performs local adaptation via adversarial learning in the output space, guiding the segmentation model to focus on high-confidence predictions. The SFA module reduces the appearance gap between domains in the shallow feature space to improve model transfer. Experiments on the challenging UDA benchmarks "GTA5-to-Cityscapes" and "SYNTHIA-to-Cityscapes" demonstrate CRAM's effectiveness: it matches state-of-the-art performance while remaining simple and fast to converge.
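The abstract does not give implementation details, so the following is a minimal, hypothetical PyTorch sketch of output-space adversarial adaptation with a confidence (entropy-based) weighting, included only to illustrate the general idea behind a CEA-style module. All class names, layer sizes, and the specific weighting scheme are assumptions, not CRAM's actual code.

```python
# Hypothetical sketch: output-space adversarial adaptation with a
# confidence weighting. Names and architecture are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class OutputSpaceDiscriminator(nn.Module):
    """Predicts, per spatial location, whether a softmax output map
    comes from the source or the target domain."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, stride=2, padding=1),  # domain logit map
        )

    def forward(self, prob_map: torch.Tensor) -> torch.Tensor:
        return self.net(prob_map)


def confidence_weighted_adv_loss(target_logits: torch.Tensor,
                                 discriminator: nn.Module) -> torch.Tensor:
    """Adversarial loss on target-domain predictions, down-weighting
    low-confidence (high-entropy) pixels so adaptation concentrates on
    regions the segmentation model is already confident about."""
    probs = F.softmax(target_logits, dim=1)                    # B x C x H x W
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)    # B x H x W
    num_classes = float(target_logits.shape[1])
    confidence = 1.0 - entropy / torch.log(torch.tensor(num_classes))

    domain_logits = discriminator(probs)                       # B x 1 x h x w
    # Fool the discriminator: make target outputs look like source (label 1).
    adv = F.binary_cross_entropy_with_logits(
        domain_logits, torch.ones_like(domain_logits), reduction="none")
    # Match the confidence map to the discriminator's output resolution.
    weight = F.interpolate(confidence.unsqueeze(1),
                           size=domain_logits.shape[2:],
                           mode="bilinear", align_corners=False)
    return (weight * adv).mean()
```

Down-weighting high-entropy pixels is one plausible reading of "focusing the segmentation model on high-confidence predictions"; the exact formulation used by CRAM, and its SFA shallow-feature alignment, may differ from this sketch.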