Abstract:
Most existing urban functional area classification methods rely on single-source data for analysis and modeling, and therefore cannot fully exploit readily available multi-scale, multi-source data. This paper proposes MM-UrbanFAC, a multi-modal machine learning model that classifies urban functional areas by jointly analyzing regional remote sensing images and visitor behavior data. A combination of supervised methods extracts deep features and relationships from each data source and filters and merges their global and local characteristics. The model uses a dual-branch neural network combining SE-ResNeXt and the Dual Path Network (DPN) to automatically mine and fuse the overall characteristics of the multi-source data, and applies purpose-built feature engineering to the user behavior data to extract additional association information. Finally, a Gradient Boosting Decision Tree (GBDT) learns from the class probability distributions produced at the different feature levels to predict the urban functional area category. Analysis and experiments on real data show that MM-UrbanFAC effectively integrates multi-modal data, that the GBDT ensemble framework improves prediction performance over a single classifier, and that the model can combine the outputs of multiple models to classify urban functional areas accurately, providing a reference for tourism recommendation, urban land planning, and urban construction.
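The sketch below illustrates the two-stage pipeline summarized above: a dual-branch image encoder (SE-ResNeXt plus DPN) producing class probabilities, which are then stacked with hand-engineered visitor-behavior features and fed to a GBDT. It is a minimal illustration, not the paper's implementation; the use of the timm library for the backbones, LightGBM for the boosting stage, the class count, feature dimensions, and hyperparameters are all assumptions for demonstration.

```python
# Minimal sketch of the MM-UrbanFAC-style pipeline (illustrative only).
# Assumptions: timm provides the SE-ResNeXt and DPN backbones, behavior data is
# pre-aggregated into fixed-length vectors, and LightGBM stands in for the GBDT.
import torch
import torch.nn as nn
import timm
import numpy as np
import lightgbm as lgb

NUM_CLASSES = 9  # hypothetical number of functional-area categories


class DualBranchNet(nn.Module):
    """Dual-branch image encoder: SE-ResNeXt and DPN features concatenated."""

    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        # num_classes=0 makes timm return pooled feature vectors instead of logits
        self.branch_a = timm.create_model("seresnext50_32x4d", pretrained=False, num_classes=0)
        self.branch_b = timm.create_model("dpn68", pretrained=False, num_classes=0)
        feat_dim = self.branch_a.num_features + self.branch_b.num_features
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, image):
        # Fuse the two branches' global features, then classify
        fused = torch.cat([self.branch_a(image), self.branch_b(image)], dim=1)
        return self.head(fused)


def stack_with_gbdt(cnn_probs, behavior_feats, labels):
    """Second stage: a GBDT learns from CNN class probabilities plus
    hand-engineered visitor-behavior features."""
    X = np.hstack([cnn_probs, behavior_feats])
    model = lgb.LGBMClassifier(n_estimators=300, learning_rate=0.05)
    model.fit(X, labels)
    return model


if __name__ == "__main__":
    net = DualBranchNet()
    dummy_images = torch.randn(8, 3, 224, 224)      # toy remote-sensing tiles
    cnn_probs = torch.softmax(net(dummy_images), dim=1).detach().numpy()
    behavior_feats = np.random.rand(8, 16)           # toy behavior features
    labels = np.random.randint(0, NUM_CLASSES, size=8)
    gbdt = stack_with_gbdt(cnn_probs, behavior_feats, labels)
    print(gbdt.predict(np.hstack([cnn_probs, behavior_feats])))
```

In practice the CNN would first be trained on labeled remote-sensing tiles, and its held-out predictions would be stacked with the behavior features before fitting the GBDT; the toy data here only demonstrates the data flow.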