Monday, November 13, 2023

ML in finance needs explainability or it will fail



JP Morgan has phased out a model that leverages machine learning technology for foreign exchange algorithmic execution, citing issues with data interpretation and the complexity involved.

The US bank had implemented what it calls a deep neural network for algo execution (DNA), which uses a machine learning framework to optimise order placement and execution styles to minimise market impact. When the model launched in 2019, JP Morgan said at the time that the move would replicate reinforcement learning

FX market news 

Interesting story that JP Morgan is taking a step back from machine learning. The reason is not that it did not work, but that it was too complex and too hard to interpret. This is a big issue with machine learning, given the strong non-linear relationships that are not always apparent. How do you explain the results? Is there a simple narrative that explains the solution generated? Do we know which key features drive the results?
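To make the "which features drive the results" question concrete, here is a minimal sketch in Python, on purely hypothetical synthetic data (this is not JP Morgan's model, and the feature names are invented): fit a non-linear model, then use permutation importance to rank how much each input actually matters to its predictions.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))
feature_names = ["spread", "order_size", "volatility", "time_of_day"]  # hypothetical inputs
# Non-linear target: only the first three features actually matter.
y = np.sin(X[:, 0]) * X[:, 1] + 0.5 * X[:, 2] ** 2 + 0.1 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much does held-out error worsen when each
# feature is shuffled? Large drops flag the features the model relies on.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:12s} importance: {score:.3f}")

A diagnostic like this does not explain why a feature matters, but it at least gives the investor a ranked, checkable answer to "what is the model actually using?"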

There has been a movement toward explainable AI, yet explainability remains a big problem that has to be faced and addressed, especially in finance. It starts with explaining the feature inputs used by the model. There then needs to be a clear explanation of the technique used, and finally a clear interpretation of the output. I have not seen the JP Morgan output, but I can tell you that explaining any ML model is not easy. Complexity must be addressed, and it takes a lot of work to make any investor comfortable with unfamiliar techniques. The burden of explainability is on the builder. Investors need to trust and verify.
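One common way to give that "clear interpretation of the output" is a global surrogate: train a simple, readable model to mimic the black box and report how faithfully it does so. Again a minimal, hypothetical sketch on synthetic data (invented feature names, not any bank's model):

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))
feature_names = ["spread", "order_size", "volatility"]  # hypothetical inputs
y = np.where(X[:, 0] > 0, X[:, 1], -X[:, 1]) + 0.1 * rng.normal(size=2000)

black_box = GradientBoostingRegressor(random_state=0).fit(X, y)

# Surrogate: a depth-3 tree trained to mimic the black box's own predictions,
# so its behaviour can be summarised as a handful of readable rules.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
fidelity = surrogate.score(X, black_box.predict(X))  # R^2 against the black box

print(f"Surrogate fidelity (R^2 against the black box): {fidelity:.2f}")
print(export_text(surrogate, feature_names=feature_names))

The fidelity score matters as much as the rules themselves: if the simple narrative only explains a fraction of the black box's behaviour, the builder should say so rather than oversell it.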
