Machine learning is the darling of data science, and rightly so. It has truly advanced our ability to make accurate predictions, but there is still the issue of how ML procedures arrive at their predictions. They are more complex than traditional tools such as linear regression. For asset management, ML creates two problems: (1) understanding how results were generated and (2) explaining how decisions were made, both for compliance and for investors.
There are solutions to these problems through advances in interpretable and explainable AI. One of the key findings in these two areas of ML is that accuracy does not have to be sacrificed by choosing processes and models that are less complex.
Interpretable AI, also called symbolic AI (SAI), employs less complex ML procedures that are easier to read and interpret. Interpretable AI focuses on traditional techniques like rules-based learning in the form of decision trees. Because there are rules, it is easier to tell a story about how forecasts are made. Each rule can be examined, and rules can be added or dropped to find the marginal value of any change.
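As a rough illustration of the rules-based idea (not code from the post), the sketch below fits a shallow decision tree on made-up data and prints its fitted rules. It assumes scikit-learn, and the feature names and labels are invented for the example.

```python
# Sketch: a shallow decision tree whose fitted rules can be read directly.
# Data, labels, and feature names are made up for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
features = ["momentum_12m", "carry", "value_spread"]
X = rng.normal(size=(500, 3))
# Toy label: "buy" when momentum and carry are jointly positive.
y = ((X[:, 0] + X[:, 1]) > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The rule set is the explanation: each root-to-leaf path is a rule that
# can be examined, added, or pruned to gauge its marginal value.
print(export_text(tree, feature_names=features))
```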
Explainable AI, also called XAI, uses more complex ML but attempts to explain it. ML systems that are not rules-based have to rely on explainable AI, where the focus is on the contribution of the features and outputs used in a black box. With XAI, tools like SHapley Additive exPlanations (SHAP) values are used to attribute the importance of each feature to a given forecast. See "Machine Learning: Explain It or Bust" for more details.
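As a minimal sketch of that attribution idea (again, not from the post), the example below uses SHAP values to break a single forecast from a gradient-boosted model into per-feature contributions. The `shap` package is assumed, and the model, data, and feature names are placeholders.

```python
# Sketch: attributing one forecast to its features with SHAP values.
# Model, data, and feature names are placeholders, not the post's.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
features = ["momentum_12m", "carry", "value_spread", "volatility"]
X = rng.normal(size=(500, 4))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# SHAP decomposes a prediction into per-feature contributions that,
# together with the base value, sum to the model's output for that row.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])

base = float(np.atleast_1d(explainer.expected_value)[0])
print(f"{'base value':>12s}: {base:+.3f}")
for name, contrib in zip(features, shap_values[0]):
    print(f"{name:>12s}: {contrib:+.3f}")
```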
Interpretable and explainable AI represent an age-old problem for all systematic managers. Even simple models have to be explained and given context. For example, trend-followers can have very different return and risk profiles. The burden is on the manager to explain why they differ from others and when they will or will not make money. There are always interpretation and explanation issues.