Abstract:
In a variety of domains and settings, model-based approaches produce effective recommendation lists: they can accurately recommend items by representing users and items through latent factors. However, even when content-based information is embedded in the model, references to the semantics of the recommended items are lost in the latent space, which makes the recommendations hard to interpret. We exploit semantic features extracted from knowledge graphs to initialize the latent factors of Factorization Machines and train an interpretable model that still makes accurate recommendations. The semantic features are injected directly into the learning process so that the informativeness of the dataset is preserved. We further propose two metrics that use the original knowledge graph to assess the semantic accuracy and robustness of knowledge-aware interpretability. An extensive experimental evaluation on six datasets shows that the interpretable model remains accurate, diverse, and robust in its recommendation results.
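To make the core idea concrete, the sketch below shows one plausible way to seed the pairwise-interaction factors of a second-order Factorization Machine with pretrained knowledge-graph embeddings instead of random values. This is a minimal illustration, not the paper's implementation: all names, shapes, and the random stand-in for the KG embeddings are assumptions for the demo.

```python
import numpy as np

# Hypothetical dimensions: n features (user one-hots, item one-hots, and
# KG-derived semantic features) and k latent dimensions.
n_features, k = 1000, 16
rng = np.random.default_rng(0)

# Stand-in for pretrained knowledge-graph embeddings of the features
# (e.g., entity vectors from the KG); random here only for the demo.
kg_embeddings = rng.normal(scale=0.1, size=(n_features, k))

# Factorization Machine parameters: the interaction factors V start from
# the KG embeddings, so each latent dimension keeps a reference to the
# semantics of the corresponding KG feature.
w0 = 0.0
w = np.zeros(n_features)
V = kg_embeddings.copy()

def fm_predict(x):
    """Second-order FM score for a dense feature vector x of shape (n,)."""
    linear = w0 + w @ x
    # O(n*k) form of the pairwise term:
    # 0.5 * sum_f [ (sum_i v_if x_i)^2 - sum_i v_if^2 x_i^2 ]
    xv = x @ V                   # shape (k,)
    x2v2 = (x ** 2) @ (V ** 2)   # shape (k,)
    pairwise = 0.5 * np.sum(xv ** 2 - x2v2)
    return linear + pairwise

# Example: one-hot user + one-hot item + binary semantic-feature flags.
x = np.zeros(n_features)
x[[3, 42, 777]] = 1.0
print(fm_predict(x))
```

Because the factors are anchored to KG entities at initialization, the learned interaction weights can later be traced back to those semantic features when explaining a recommendation.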