Monday, June 24, 2024

Accuracy, complexity, robustness, variance, and bias in one chart

[Chart: trade-offs among accuracy, complexity, robustness, variance, and bias]

from Pedma @pedma7


One key advance from machine learning and data science has been the increased focus on the trade-offs inherent in model building and testing. There is a trade-off between accuracy and complexity, and between complexity and robustness. If we add complexity, robustness falls: bias may decline, but variance from overfitting rises. If we lower complexity, bias rises and we are likely to underfit. The impact shows up clearly between training and testing. Adding complexity improves accuracy on the training sample, but only through overfitting; when we use the model on a test sample, accuracy will likely decline. The problem becomes worse when we look at a shifted or modified sample.
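To make the trade-off concrete, here is a minimal sketch in Python that fits polynomials of increasing degree to noisy synthetic data; the target function, noise level, sample sizes, and degrees are all illustrative assumptions, not from the chart or the post.

```python
# Illustrative sketch of the accuracy/complexity trade-off: training error
# falls as degree grows, test error eventually rises, and error on a
# shifted sample rises much faster. All settings below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def make_sample(n, shift=0.0):
    """Noisy observations of a smooth target; `shift` moves the input
    range to mimic the shifted or modified sample mentioned above."""
    x = rng.uniform(-1 + shift, 1 + shift, n)
    y = np.sin(np.pi * x) + rng.normal(0, 0.3, n)
    return x, y

x_train, y_train = make_sample(40)               # small training set
x_test, y_test = make_sample(200)                # same distribution
x_shift, y_shift = make_sample(200, shift=0.5)   # shifted distribution

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on (x, y)."""
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

print(f"{'degree':>6} {'train':>8} {'test':>8} {'shifted':>8}")
for degree in (1, 3, 6, 12):
    # Higher degree = more complexity; fit on the training sample only.
    coeffs = np.polyfit(x_train, y_train, degree)
    print(f"{degree:>6} {mse(coeffs, x_train, y_train):8.3f} "
          f"{mse(coeffs, x_test, y_test):8.3f} "
          f"{mse(coeffs, x_shift, y_shift):8.3f}")
```

In runs like this, the low-degree fit underfits (high bias), a mid-degree fit does best on the held-out sample, and the high-degree fit pushes training error toward zero while test error rises and shifted-sample error blows up.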

This problem is present throughout finance whenever we try to make predictions. Models that work in training, as reported in academic papers, often fail out-of-sample. Trend models that add more features underperform simple models when put into production. The challenge in finance is not coming up with new models or features but learning how to better use the features that already exist.
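As a hedged illustration of that production failure mode, the sketch below compares return-prediction models with one versus many lagged-return features on synthetic random-walk returns that contain no real signal; the data and settings are assumptions for the example, not a real trend model. In-sample fit improves mechanically as features are added, while out-of-sample fit does not.

```python
# Illustrative sketch: adding lagged-return features always improves
# in-sample fit, even on pure noise, but gives nothing out-of-sample.
import numpy as np

rng = np.random.default_rng(1)
r = rng.normal(0, 0.01, 2000)   # synthetic daily returns, no real signal

def lag_matrix(r, p):
    """Rows aligned so X[t] holds r[t-1], ..., r[t-p] to predict r[t]."""
    n = len(r)
    X = np.column_stack([r[p - 1 - j : n - 1 - j] for j in range(p)])
    return X, r[p:]

def r2(y, yhat):
    """Fraction of variance explained."""
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

for p in (1, 5, 40):
    X, y = lag_matrix(r, p)
    n = len(y) // 2                       # first half trains, second half tests
    beta, *_ = np.linalg.lstsq(X[:n], y[:n], rcond=None)
    print(f"lags={p:>2}  in-sample R2={r2(y[:n], X[:n] @ beta):+.4f}  "
          f"out-of-sample R2={r2(y[n:], X[n:] @ beta):+.4f}")
```

The in-sample R2 grows with the feature count purely by fitting noise, while the out-of-sample R2 stays near zero or turns negative, which is the pattern the paragraph above describes.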
