Abstract:
Hashing methods have proven effective and efficient for large-scale Web media search in recent years. However, general hashing methods lack discrimination for fine-grained objects with similar appearances but subtle differences.
To address this problem, we are the first to apply the attention mechanism to fine-grained hashing code learning. We propose deep saliency hashing (DSaH), a new deep hashing model that automatically mines salient regions and learns semantic-preserving hashing codes.
DSaH is a two-step end-to-end model consisting of an attention network and a hashing network. Our loss function comprises three parts: a semantic loss, a saliency loss, and a quantization loss. The saliency loss guides the attention network to mine discriminative regions from pairs of images.
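The three-part objective described above can be sketched as follows. This is a minimal, hypothetical illustration: the function names, pairwise-margin form of the semantic loss, and the weight `lam` are assumptions for exposition, not the paper's exact formulation, and the saliency loss (which depends on the attention network's mined regions) is omitted.

```python
import numpy as np

def quantization_loss(h):
    # Push relaxed continuous codes h in (-1, 1) toward binary {-1, +1}.
    return np.mean((np.abs(h) - 1.0) ** 2)

def semantic_loss(h_a, h_b, similar, margin=2.0):
    # Hypothetical pairwise form: pull same-class code pairs together,
    # push different-class pairs at least `margin` apart.
    d = np.sum((h_a - h_b) ** 2, axis=1)
    return np.mean(np.where(similar, d, np.maximum(0.0, margin - d)))

def total_loss(h_a, h_b, similar, lam=0.1):
    # Combined objective: semantic term plus weighted quantization terms
    # (the saliency loss from the attention network is omitted in this sketch).
    return semantic_loss(h_a, h_b, similar) + lam * (
        quantization_loss(h_a) + quantization_loss(h_b))
```

For already-binary codes of a similar pair, every term vanishes, so the loss is zero; relaxed (non-binary) codes incur a quantization penalty.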
We conduct extensive experiments on both fine-grained and general retrieval datasets. DSaH achieves the best performance for fine-grained retrieval on Oxford Flowers, Stanford Dogs, and CUB Birds, beating the strongest competitor (DTQ) by approximately 10% on these datasets. On the general datasets CIFAR-10 and NUS-WIDE, DSaH performs comparably to several state-of-the-art hashing methods.