What is the bias-variance tradeoff?
The bias-variance tradeoff in machine learning refers to the tension between a model's bias and its variance: changes that reduce bias (such as making the model more flexible) tend to increase variance, and changes that reduce variance tend to increase bias.
The expected prediction error of a machine learning model, i.e., how far its predictions fall from the ground truth on average, can be broken down into the sum of a squared bias term, a variance term, and an irreducible noise term. Because the total error is the sum of these terms, driving one of them down typically drives another up, which is where the trade-off comes from.
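For squared-error loss, this decomposition can be written out explicitly; the standard form, with the noise term shown separately, is

$$
\mathbb{E}\big[(y - \hat f(x))^2\big]
= \underbrace{\big(\mathbb{E}[\hat f(x)] - f(x)\big)^2}_{\text{bias}^2}
+ \underbrace{\mathbb{E}\big[(\hat f(x) - \mathbb{E}[\hat f(x)])^2\big]}_{\text{variance}}
+ \underbrace{\sigma^2}_{\text{irreducible error}}
$$

where $f$ is the true function, $\hat f$ is the model fitted on a randomly drawn training set (the expectations are taken over training sets), and $\sigma^2$ is the noise in the observations.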
The bias measures how well the model class you chose can approximate the ground truth in the best case, i.e., the systematic error that remains even with ideal training. Variance measures how tightly the models trained on different datasets cluster around that best-case model.
If the variance is low, all the trained models land close to the best-case model; if the variance is high, they are spread out around it.
Bias is the error introduced by the simplifying assumptions a model makes when it approximates a relatively complex real-life problem with a simpler, approximate model.
Highly flexible models tend to make fewer assumptions about the data, which results in lower bias. However, such flexible models also tend to have higher variance, which can lead to overfitting.
Variance is essentially the amount by which the predictions of the machine learning model change when a different training dataset is used. Changing the training data changes the fitted model; if even minor changes in the training data cause large changes in the model, you can deduce that the model suffers from high variance.
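As a rough illustration, here is a minimal sketch in Python (NumPy only; the true function, noise level, sample size, and polynomial degrees are illustrative assumptions) that refits the same model class on many freshly sampled training sets and checks how much the prediction at a single point moves around:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    return np.sin(2 * np.pi * x)            # illustrative ground truth

def sample_training_set(n=50):
    x = rng.uniform(0, 1, n)
    y = true_fn(x) + rng.normal(0, 0.3, n)  # noisy observations
    return x, y

def fit_and_predict(degree, x_query):
    x, y = sample_training_set()
    coefs = np.polyfit(x, y, degree)        # least-squares polynomial fit
    return np.polyval(coefs, x_query)

x0 = 0.2  # fixed query point
for degree in (1, 9):
    preds = np.array([fit_and_predict(degree, x0) for _ in range(200)])
    bias = preds.mean() - true_fn(x0)       # average error of the fitted models
    var = preds.var()                       # spread of the fitted models
    print(f"degree {degree}: bias^2 = {bias**2:.4f}, variance = {var:.4f}")
```

Typically the inflexible degree-1 model shows a large squared bias and a small variance, while the flexible degree-9 model shows the reverse, which is exactly the trade-off described above.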
Why is the bias-variance tradeoff important?
Because bias and variance move in opposite directions as the flexibility of the machine learning model changes, a tradeoff exists: as bias increases, variance decreases, and vice versa.
As the model becomes more complex or flexible, the bias initially decreases faster than the variance increases. Beyond a certain point, however, the variance rises sharply with additional flexibility while the bias barely improves.
The tradeoff in model complexity is what causes the tradeoff between bias and variance. If an algorithm is too simple, it will have high bias and low variance, so it makes systematic errors even on the data it was trained on. If the algorithm is too complex, it will have high variance and low bias, so it fits the training data closely but performs poorly on new examples.
Essentially, you’re facing a problem of overfitting versus underfitting.
The goal is to find a machine learning model of optimum complexity where the combined error from bias and variance is as low as possible, giving you a good fit. This is the sweet spot where your model balances the errors introduced by bias and variance.
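To make the sweet spot concrete, here is a small sketch (the synthetic data and the choice of degrees 1, 4, and 15 are purely illustrative) comparing training and test error as model flexibility grows:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (60, 1))
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(0, 0.3, 60)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for degree in (1, 4, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(X_train))
    test_mse = mean_squared_error(y_test, model.predict(X_test))
    print(f"degree {degree:>2}: train MSE = {train_mse:.3f}, test MSE = {test_mse:.3f}")
```

Typically degree 1 underfits (both errors are high), degree 15 overfits (low training error, higher test error), and a middle degree sits near the sweet spot.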
How do you calculate the bias-variance tradeoff?
You cannot read bias and variance directly off a trained model, but you can estimate where a model sits on the trade-off by using k-fold cross-validation and applying a grid search (such as scikit-learn's GridSearchCV) over its parameters. This technique lets you compare scores across the tuning options you specify, making it possible to choose the model that achieves the best cross-validation score.
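A sketch of how this might look with scikit-learn (the pipeline, grid values, and scoring choice are illustrative assumptions, and the data is synthetic):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge

# Synthetic 1-D regression problem, purely for illustration.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 1))
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(0, 0.3, 200)

# Grid-search model flexibility (polynomial degree) and regularization
# strength (alpha) with 5-fold cross-validation.
pipe = Pipeline([("poly", PolynomialFeatures()), ("ridge", Ridge())])
param_grid = {
    "poly__degree": [1, 2, 3, 5, 8, 12],
    "ridge__alpha": [0.01, 0.1, 1.0, 10.0],
}
search = GridSearchCV(pipe, param_grid, cv=5, scoring="neg_mean_squared_error")
search.fit(X, y)

print(search.best_params_)   # complexity/regularization with the lowest CV error
print(-search.best_score_)   # the corresponding mean cross-validation MSE
```

Each grid point trades flexibility against regularization, and the cross-validated score picks the combination where the estimated total error is lowest.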
How to fix high variance?
To fix high variance, you first need to identify it. A high-variance model tends to perform extremely well on the training dataset but does not hold up on a test or cross-validation dataset. So if the training accuracy is high and the test accuracy is low, you can say that the model has high variance.
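A minimal sketch of that diagnosis, assuming scikit-learn and a synthetic classification problem (an unpruned decision tree stands in here for any highly flexible model):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# An unpruned tree is flexible enough to memorise the training set.
clf = DecisionTreeClassifier(max_depth=None, random_state=0).fit(X_tr, y_tr)
print("train accuracy:", clf.score(X_tr, y_tr))  # typically close to 1.0
print("test accuracy: ", clf.score(X_te, y_te))  # noticeably lower => high variance
```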
You can fix high variance by cutting down on the number of features in the model. There are several techniques for identifying which features add little or no value and which ones the model actually needs. You can also decrease the degree of the polynomial, or otherwise make the model less complex, to reduce high variance.
Regularization is widely used to solve the issue of overfitting.
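For example, here is a sketch of how ridge regularization might be applied to an over-flexible polynomial model (the degree and the alpha values are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (80, 1))
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(0, 0.3, 80)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A degree-15 polynomial is prone to overfitting; increasing alpha shrinks the
# coefficients, trading a little extra bias for a reduction in variance.
for alpha in (1e-4, 1.0, 100.0):
    model = make_pipeline(PolynomialFeatures(degree=15), Ridge(alpha=alpha))
    model.fit(X_train, y_train)
    print(f"alpha={alpha:g}: train R^2 = {model.score(X_train, y_train):.3f}, "
          f"test R^2 = {model.score(X_test, y_test):.3f}")
```

Too large an alpha swings the model back toward high bias, so the value is usually tuned with cross-validation.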
How to fix high bias?
A model with high bias performs poorly on the training dataset as well as the test dataset. Such models have low accuracy and F1 scores because there is a large, systematic gap between the predicted and actual values.
To fix high bias, you can add more features to the model or carry out feature engineering, which gives the model more meaningful signals to learn from. You can also increase the complexity of the model, for example by increasing the degree of the polynomial, to reduce the bias; but if you push this beyond a certain point, the cross-validation error starts increasing. You can also decrease the regularization strength, for instance the alpha parameter of ridge or lasso regression.
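A small sketch of the first two ideas, adding polynomial features to an underfitting linear model (the cubic ground truth and the degrees are illustrative assumptions):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, (200, 1))
y = X[:, 0] ** 3 - X[:, 0] + rng.normal(0, 0.05, 200)  # nonlinear ground truth
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Degree 1 underfits the cubic relationship; degree 3 supplies the missing features.
for degree in (1, 3):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_tr, y_tr)
    print(f"degree {degree}: "
          f"train MSE = {mean_squared_error(y_tr, model.predict(X_tr)):.4f}, "
          f"test MSE = {mean_squared_error(y_te, model.predict(X_te)):.4f}")
```

Because the original model was too simple, both training and test error drop when the extra features are added, which is the signature of reduced bias rather than extra overfitting.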
How do you balance bias and variance?
You need to balance the bias and the variance to find a good fit. As a machine learning model keeps increasing in complexity, there is a point beyond which the cross-validation error begins to rise. At that point the model should stop gaining complexity, and you keep the settings defined by that point on the curve.
This tends to be the point at which the bias and variance curves cross, giving the optimal model complexity.
This is the point at which the model has low bias and low variance. It does not lead to overfitting or underfitting in the model, but results in a good fit.
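Written out by hand, this balancing act might look like the following sketch, which sweeps model complexity and keeps the setting with the lowest cross-validation error, i.e., the point just before the error curve turns back up (the data and the degree range are illustrative assumptions):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (200, 1))
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(0, 0.3, 200)

best_degree, best_err = None, np.inf
for degree in range(1, 16):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    cv_err = -cross_val_score(model, X, y, cv=5,
                              scoring="neg_mean_squared_error").mean()
    if cv_err < best_err:                    # still improving: keep this complexity
        best_degree, best_err = degree, cv_err
print(f"chosen degree: {best_degree} (cross-validation MSE {best_err:.3f})")
```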