generalization error of a machine learning task and the effect of an increase in model complexity on the generalization error.
The generalization error measures how accurately our machine learning model predicts unseen data. Because we build the model from a limited sample of data, the generalization error plays an important role when we think about the accuracy of the model.
The generalization error depends on two characteristics of the fitted model: its bias and its variance. Together these tell us how well the model generalizes. If the bias is high, the model under-fits the training data, so it loses its ability to generalize. Likewise, if the variance is high, the model over-fits the training data: it becomes too specific to the individual training points, so it also gives poor predictions on unseen data.
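For squared-error loss, this relationship is usually summarised by the bias-variance decomposition; a standard textbook form (added here as a supplement, the symbols are not from the original text) is:

$$
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
= \big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2
+ \mathbb{E}\big[\big(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\big)^2\big]
+ \sigma^2
$$

that is, expected error = bias² + variance + irreducible noise, where $f$ is the true function, $\hat{f}$ is the model fitted on a random training set, and $\sigma^2$ is the noise in the labels.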
To reduce the generalization error (whether it comes from over-fitting or under-fitting), we can adjust the model complexity.
First, let's consider a simple linear model.
Here the simple linear model has high bias, because its predictions are far from the true values, and low variance, because the predictions do not change much from one training sample to another. In other words, it is under-fitted.
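A minimal sketch of this situation, assuming scikit-learn and NumPy are available (the sine data-generating function and all parameter values are illustrative choices, not from the original text):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Illustrative nonlinear ground truth: y = sin(2*pi*x) + noise
X = rng.uniform(0, 1, size=(50, 1))
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.1, size=50)

# A straight line cannot follow the sine curve, so it under-fits:
# the training error stays high (high bias), but refitting on a new
# sample would give almost the same line (low variance).
linear = LinearRegression().fit(X, y)
print("training MSE (linear):", mean_squared_error(y, linear.predict(X)))
```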
Now suppose we go for a complex polynomial model instead.
This model has very low bias, because it fits the training points almost exactly, but high variance, so it is over-fitted. Whenever we try to reduce one of these characteristics, the other tends to increase, and vice versa.
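Continuing the same illustrative setup, a high-degree polynomial shows the opposite behaviour; the degree and sample sizes below are arbitrary choices for the sketch:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Same illustrative data as before, plus a separate unseen test set.
X_train = rng.uniform(0, 1, size=(30, 1))
y_train = np.sin(2 * np.pi * X_train).ravel() + rng.normal(0, 0.1, size=30)
X_test = rng.uniform(0, 1, size=(200, 1))
y_test = np.sin(2 * np.pi * X_test).ravel() + rng.normal(0, 0.1, size=200)

# A degree-15 polynomial can pass through nearly every training point
# (very low bias) but wiggles wildly between them (high variance),
# so the test error is much worse than the training error.
poly = make_pipeline(PolynomialFeatures(degree=15), LinearRegression())
poly.fit(X_train, y_train)
print("training MSE:", mean_squared_error(y_train, poly.predict(X_train)))
print("test MSE:    ", mean_squared_error(y_test, poly.predict(X_test)))
```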
So as model complexity increases, bias tends to fall while variance rises; this is the bias-variance tradeoff.
The picture above clearly shows how the generalization error changes with model complexity because of the bias-variance tradeoff: the error is high for very simple and very complex models, and lowest somewhere in between.
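A rough way to reproduce that curve numerically is to sweep the polynomial degree and watch the test error fall and then rise again; a hedged sketch with the same illustrative data-generating assumptions as above:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

def make_data(n):
    # Illustrative data: noisy sine wave (an assumption for this sketch).
    X = rng.uniform(0, 1, size=(n, 1))
    y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.1, size=n)
    return X, y

X_train, y_train = make_data(40)
X_test, y_test = make_data(400)

# Low degrees under-fit (high bias), high degrees over-fit (high variance);
# the test error is typically lowest at an intermediate degree.
for degree in range(1, 16):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    print(degree,
          round(mean_squared_error(y_train, model.predict(X_train)), 4),
          round(mean_squared_error(y_test, model.predict(X_test)), 4))
```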
To summarize, we have to clearly understand how well our model can generalize from the data set we have, and consequently choose an appropriate level of model complexity that maintains the balance between bias and variance.