Abstract:
Automatically locating diseases on chest X-rays with only image-level labels can spare radiologists laborious region annotation. Recent weakly-supervised algorithms, such as multi-instance learning (MIL) and class activation maps (CAM), attempt this task, but they often produce inaccurate or incomplete regions. One reason is that they neglect the pathological cues hidden in the relationships across anatomical regions within each image and across images. This paper argues that the contextual and compensating information carried by these cross-region and cross-image relationships is essential for producing more consistent and integral regions. We propose the Graph Regularized Embedding Network (GREN), which leverages intra-image and inter-image information to locate diseases on chest X-ray images. GREN segments the lung lobes with a pre-trained U-Net and models their relationships with an intra-image graph that compares regions within an image, while an inter-image graph compares images within a batch. This mimics how radiologists are trained and how they diagnose: by comparing multiple regions and multiple images. To retain this structural information in the deep embedding layers used for localization, we compute the graphs using hash coding and Hamming distance and employ them as regularizers. Our approach achieves state-of-the-art weakly-supervised disease localization on the NIH Chest X-ray dataset. The code is available online.
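The graph regularizer described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes sign-based binarization for the hash coding step, a normalized Hamming similarity to build the affinity graph, and a Laplacian-style penalty that pulls together embeddings whose hash codes agree; all function names here are hypothetical.

```python
import numpy as np

def hash_codes(embeddings):
    # Hash coding (assumed sign binarization): embedding -> binary code
    return (embeddings > 0).astype(np.int8)

def hamming_affinity(codes):
    # Pairwise Hamming distance between binary codes, normalized by code length,
    # converted to a similarity in [0, 1] (1 = identical codes)
    n, d = codes.shape
    dist = (codes[:, None, :] != codes[None, :, :]).sum(-1) / d
    return 1.0 - dist

def graph_regularizer(embeddings):
    # Laplacian-style penalty: regions/images with similar hash codes
    # are encouraged to stay close in the deep embedding space
    sim = hamming_affinity(hash_codes(embeddings))
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    return (sim * (diff ** 2).sum(-1)).sum() / embeddings.shape[0] ** 2

rng = np.random.default_rng(0)
emb = rng.standard_normal((4, 16))  # e.g., four lung-lobe region embeddings
loss = graph_regularizer(emb)      # scalar added to the training objective
```

The same construction applies to both graphs: over region embeddings within one image (intra-image) and over image embeddings within a batch (inter-image).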