Abstract:
Deep neural networks (DNNs) have achieved remarkable success on complex machine learning tasks, a success largely attributed to the expressive power of their function representations. Because piecewise linear neural networks (PLNNs) model complex patterns by composing linear pieces, the number of linear regions they realize is a natural measure of their expressive power. This article theoretically analyzes the expressive power of PLNNs by counting and bounding their linear regions. We first refine the existing bounds on the number of linear regions of PLNNs with rectified linear units (ReLU PLNNs). We then analyze PLNNs with general piecewise linear (PWL) activation functions and derive the maximum number of linear regions of single-layer PLNNs. For multilayer PLNNs, we establish upper and lower bounds on the number of linear regions that scale polynomially with the number of neurons per layer and the number of pieces of the PWL activation function, but exponentially with the number of layers. Consequently, deep PLNNs with complex activation functions can compute highly complex and structured functions more efficiently than shallow ones, which partially explains their superior performance in classification and function fitting.
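To make the notion of counting linear regions concrete, the following minimal sketch (not part of the paper; network sizes and weights are hypothetical) empirically estimates the number of linear regions of a small ReLU network with a scalar input by enumerating the distinct ReLU activation patterns encountered along a dense grid of inputs. Each activation pattern corresponds to one linear piece of the network, so the count obtained this way is a lower bound on the true number of regions.

```python
# Minimal sketch: empirically count linear regions of a small ReLU network
# on a 1-D input by enumerating distinct activation patterns along a grid.
# Weights are random and layer widths are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def activation_pattern(x, weights, biases):
    """Forward pass returning the ReLU on/off pattern over all hidden layers."""
    pattern = []
    h = x
    for W, b in zip(weights, biases):
        z = W @ h + b
        pattern.append(z > 0)          # which ReLUs are active at this input
        h = np.maximum(z, 0.0)
    return np.concatenate(pattern)

# Scalar input, two hidden layers of width 8 (hypothetical sizes).
sizes = [1, 8, 8]
weights = [rng.standard_normal((sizes[i + 1], sizes[i])) for i in range(len(sizes) - 1)]
biases = [rng.standard_normal(sizes[i + 1]) for i in range(len(sizes) - 1)]

xs = np.linspace(-3.0, 3.0, 100_000)
patterns = {activation_pattern(np.array([x]), weights, biases).tobytes() for x in xs}
print("linear regions found on [-3, 3]:", len(patterns))
```

Repeating the experiment with more layers illustrates the behavior the bounds describe: the number of regions found grows only polynomially as a single layer is widened, but far faster as layers are stacked.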