Growing complexity in machine learning models is typically countered by regularization, which is expected to reduce it. Neural network (NN) models are distinctive in that their regularization must also remain compatible with gradient-based estimation. L1-norm shrinkage regularization is popular because it can select model parameters, but it has been shown to lose this effect under gradient-based estimation: because the L1 penalty is not differentiable at zero, plain gradient updates rarely drive parameters exactly to zero. This study develops a shrinkage (LASSO) regularization whose gradient is modified so that it remains effective under gradient-based estimation, referred to as modified LASSO (mLASSO). The mLASSO regularization is constructed in a knowledge-driven manner to perform node selection, allowing a node to be partially or fully unselected according to the connections it makes. Our simulation study demonstrates the consistency of mLASSO regularization: its behavior is unaffected by the choice of activation function, the distribution of the data, or the number of hidden-layer nodes, a property that is important in NN architecture design. It is also effective in reducing the model's goodness-of-fit criterion. Empirical results indicate that mLASSO regularization counteracts overfitting in NN models, so that NN-mLASSO predictions are not affected by outliers in the training data, and that it significantly influences the model's goodness-of-fit criteria. We conclude that mLASSO regularization mitigates the risk of overfitting in NN models while guaranteeing significantly lower model complexity.
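The paper's exact gradient modification is not reproduced here, but a minimal sketch can illustrate the problem mLASSO addresses: under plain gradient descent the L1 subgradient never produces exact zeros, whereas a proximal (group soft-threshold) step applied to each hidden node's incoming weight group can zero out whole nodes, i.e. unselect them. Everything in this sketch (the toy network, the group_soft_threshold helper, and the learning-rate and penalty values) is an illustrative assumption standing in for the authors' method, not their implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-hidden-layer regression network: y ≈ tanh(X @ W1.T) @ w2
n_in, n_hidden, n_obs = 5, 10, 200
X = rng.normal(size=(n_obs, n_in))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n_obs)

W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))  # incoming weights per node
w2 = rng.normal(scale=0.5, size=n_hidden)          # output weights

lr, lam = 0.05, 0.02  # learning rate and shrinkage strength (assumed values)

def group_soft_threshold(W, t):
    """Shrink each row (one node's incoming weight group) toward zero;
    rows whose norm falls below t are zeroed, fully unselecting the node."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return W * scale

for step in range(500):
    H = np.tanh(X @ W1.T)          # hidden activations, shape (n_obs, n_hidden)
    resid = H @ w2 - y             # prediction error
    # Ordinary gradients of the mean squared loss
    g_w2 = H.T @ resid / n_obs
    g_H = np.outer(resid, w2) * (1 - H**2)   # back-propagate through tanh
    g_W1 = g_H.T @ X / n_obs
    W1 -= lr * g_W1
    w2 -= lr * g_w2
    # Proximal shrinkage step standing in for the mLASSO gradient modification
    W1 = group_soft_threshold(W1, lr * lam)

active = np.linalg.norm(W1, axis=1) > 0
print(f"active hidden nodes: {active.sum()} / {n_hidden}")
```

Element-wise rather than row-wise thresholding would zero individual connections instead of whole rows, corresponding to the partial unselection of a node described above.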
