Python Deep Learning Projects

Abstract:

This paper trains a deep convolutional neural network with low-bitwidth weights and activations. Because the quantizer is non-differentiable, optimizing a low-precision network is difficult and typically incurs an accuracy loss. We address this with progressive quantization, stochastic precision, and joint knowledge distillation. First, we propose two progressive quantization schemes to find good local minima. In the first scheme, we optimize a network with quantized weights and only then quantize the activations, unlike traditional methods that optimize both simultaneously. The second scheme gradually reduces the bitwidth from high precision to low precision during training. Second, to relieve the burden of multi-round training, we propose a one-stage stochastic precision strategy that randomly samples and quantizes sub-networks while keeping the remaining parts at full precision. Finally, we jointly train a full-precision model alongside the low-precision model using a novel learning scheme: the full-precision model guides the training of the low-precision model and improves its performance. Extensive experiments on CIFAR-100 and ImageNet demonstrate the effectiveness of the proposed methods.
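
For illustration, here is a minimal PyTorch-style sketch of the core ideas in the abstract: a uniform quantizer with a straight-through estimator for the non-differentiable rounding step, a convolution layer whose weight and activation quantization can be switched on progressively, and a joint loss in which the full-precision model guides the low-precision one. The names (quantize_k, QuantConv, joint_kd_loss), the DoReFa-style weight mapping, and the bit settings are illustrative assumptions, not the paper's released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def quantize_k(x, bits):
    """Uniformly quantize x in [0, 1] to 2**bits - 1 levels.
    Rounding is non-differentiable, so the backward pass uses a
    straight-through estimator (gradient of the identity)."""
    levels = 2 ** bits - 1
    q = torch.round(x * levels) / levels
    return x + (q - x).detach()  # forward = q, backward sees x

class QuantConv(nn.Conv2d):
    """Conv layer with optional low-bitwidth weights/activations.
    The quant_w / quant_a flags allow progressive training:
    quantize weights first, enable activation quantization later."""
    def __init__(self, *args, w_bits=2, a_bits=2,
                 quant_w=True, quant_a=False, **kwargs):
        super().__init__(*args, **kwargs)
        self.w_bits, self.a_bits = w_bits, a_bits
        self.quant_w, self.quant_a = quant_w, quant_a

    def forward(self, x):
        w = self.weight
        if self.quant_w:
            # DoReFa-style mapping: squash weights to [0, 1],
            # quantize, then rescale back to [-1, 1]
            w = torch.tanh(w)
            w = w / (2 * w.abs().max()) + 0.5
            w = 2 * quantize_k(w, self.w_bits) - 1
        if self.quant_a:
            x = quantize_k(torch.clamp(x, 0, 1), self.a_bits)
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

def joint_kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    """Joint training loss: cross-entropy on the labels plus a
    distillation term so the full-precision (teacher) model guides
    the low-precision (student) model."""
    ce = F.cross_entropy(student_logits, targets)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    return (1 - alpha) * ce + alpha * kd
```

Under the progressive scheme, one would first train with quant_a=False (quantized weights only) and later enable activation quantization, or step the w_bits/a_bits values down from high to low precision across training rounds; the stochastic precision strategy would instead randomly choose, at each iteration, which layers run quantized and which stay at full precision.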

Note: Please discuss with our team before submitting this abstract to your college. The abstract or synopsis can vary based on the student's project requirements.
