Try to explain artificial intelligence to someone who is not well-versed in the mechanics. It is not easy, even for a strong quantitative analyst. You may know the math, you may be able to program the model effectively, and you may understand all of the data inputs, yet still not be able to explain what the model is doing. The argument that you just have to live with "hidden layers" may not cut it from a risk management perspective, or when performance goes wrong.
Help is supposed to be on the way through a new area of research called explainable AI, or more precisely interpretable AI. I was hoping for a new form of AI clarity when I started reading this research. No such luck. (See "Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)", IEEE Access.) Investors interested in using AI and machine learning will still have to do the heavy work of learning, testing, and expending the sweat equity to understand what is going on.
So, what is explainable AI, or xAI? There is no generally accepted definition, but it refers to the movement to increase transparency and trust in AI. It is a broad set of techniques or approaches for reducing the obscurity associated with "black box" models. There is no simple model that will break down the drivers of another model; instead, xAI looks at simplification, marginal impact, and scenario examples to increase our understanding of the model and its output. The goal is to increase explainability without decreasing forecast accuracy.
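As a concrete illustration of the "marginal impact" idea, here is a minimal sketch using permutation importance: shuffle one input at a time and see how much the model's out-of-sample fit degrades. The data, factor names, and model choice below are purely hypothetical assumptions for illustration, not a recommended workflow; it assumes scikit-learn is available.

```python
# Sketch of one common xAI technique: permutation importance.
# Shuffle each input and measure the drop in accuracy, which approximates
# that input's marginal impact on the model's predictions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                 # four hypothetical factors
y = 2.0 * X[:, 0] - X[:, 1] ** 2 + rng.normal(scale=0.5, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature several times and record the average drop in R^2.
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)
for i, name in enumerate(["factor_0", "factor_1", "factor_2", "factor_3"]):
    print(f"{name}: mean importance {result.importances_mean[i]:.3f}")
```

A large drop for a given factor suggests the model leans heavily on it, which is the kind of simple, model-agnostic evidence xAI is after.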
More complexity reduces explainability; hence, interpretable models usually come at the cost of reduced accuracy. The idea behind explainable AI is to reduce complexity or to show the additive value of complexity. This does not remove the difficulties of properly using data or provide some magic, easy-to-read procedure; it focuses on procedures that link data relationships with predictions. The issue is less one of "black boxes" than of complex boxes that capture non-linear or deep relationships that are not immediately obvious. The benefit of AI is precisely its ability to find what is not immediately obvious.
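"Showing the additive value of complexity" can be as simple as comparing a transparent model against an opaque one on the same data. The sketch below, again on made-up data with scikit-learn assumed, compares cross-validated accuracy of a linear regression and gradient-boosted trees; if the gain from the complex model is small, the simpler, more explainable one may be good enough.

```python
# Sketch: quantify the additive value of complexity by comparing a simple,
# interpretable model with a more complex one on the same synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = X[:, 0] + 0.5 * np.sin(3 * X[:, 1]) + rng.normal(scale=0.3, size=500)

models = [("linear", LinearRegression()),
          ("boosted trees", GradientBoostingRegressor(random_state=0))]

for name, est in models:
    scores = cross_val_score(est, X, y, cv=5, scoring="r2")
    print(f"{name}: mean out-of-sample R^2 = {scores.mean():.3f}")
```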
Unfortunately, I would say there are still difficulties associated with "explainable" regression. Strong visualization software and simulation tools help, but any increase in interpretability will depend on basic knowledge and increased usage.
Usage in many organizations is generational. Old management must be replaced with new management that has employed these techniques as part of its normal decision-making toolkit. This knowledge transfer is faster for smaller organizations (hedge funds) and slower for traditional money managers. So for the near term, crack open the textbook, learn the coding, and run some models; there is no easy alternative or "free lunch".