Joined: Sep 16, 2019
Posted: Jan 30, 2022 08:45 PM
Msg. 1 of 2
There seems to be a lack of research regarding the use of tick-by-tick data as input to machine learning models. Has anyone experimented with machine learning on tick-by-tick data?

I've trained an LSTM on about two years' worth of tick-by-tick data from DTN, filtered to symbols that fit certain criteria such as float, volume, and price. The result is a model with around 69.9% accuracy. A naïve model that predicts the next bid price will be the same as the last tick has an accuracy of around 65%. I'm wondering if I can increase my model's accuracy through feature engineering.

Could anyone share research papers on machine learning with tick-by-tick data? Does anyone have insight into data transformations that can be applied to financial data and might increase accuracy when used as features in machine learning models?
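For context, the naïve baseline I'm comparing against is just "next bid equals current bid." A minimal sketch of how I measure its accuracy (the function name and the tolerance parameter are mine, purely for illustration):

```python
import numpy as np

def naive_baseline_accuracy(bids, tolerance=0.0):
    """Accuracy of predicting that the next tick's bid equals the
    current tick's bid ("no change" baseline).  `tolerance` lets you
    count near-misses as hits; 0.0 means exact match only."""
    bids = np.asarray(bids, dtype=float)
    predictions = bids[:-1]   # predict "no change" for every tick
    actuals = bids[1:]        # what the next tick actually was
    hits = np.abs(predictions - actuals) <= tolerance
    return hits.mean()
```

Run over a bid series, e.g. `naive_baseline_accuracy([10.00, 10.00, 10.01, 10.01])` scores two out of three transitions as unchanged.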
Interests & Tools: Machine Learning, Neural Networks, Deep Learning, Python, Java, Trading, Small Caps, Interactive Brokers.
Edited by keohir808 on Jan 30, 2022 at 08:50 PM
Joined: May 7, 2004
Posted: Jan 31, 2022 11:10 AM
Msg. 2 of 2
Yes, I experimented with this a few years ago. Your questions are relevant and insightful, but I don't have much useful information to offer in reply.
I haven't seen many published papers on the subject in recent years. Take that with a grain of salt, though, because I'm not looking very actively. It's also possible that where the technique has been applied successfully, it simply isn't discussed in public, for obvious reasons. Hopefully someone else will reply with better information.
In general, I hit the same roadblocks you did. It's hard to choose the right network architecture (financial data isn't statistically stationary, so I wasn't able to design either recurrent or convolutional networks that were consistently successful). Raw tick-by-tick data has so much variability along so many dimensions that I suspect feature engineering is necessary, but that's a major research project in its own right. Techniques currently being used for natural language processing are probably where I'd start if I were to look at this again today.
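To make the feature-engineering point concrete, here's a toy sketch of the kind of transforms people commonly derive from raw ticks: log returns, inter-tick time deltas, and a rolling volatility proxy. The `tick_features` name and the particular feature set are just illustrative, not a recommendation:

```python
import numpy as np

def tick_features(prices, timestamps, window=50):
    """Derive a few standard features from raw tick data.

    Returns one row per tick transition with columns:
    [log return, seconds since previous tick, rolling return volatility].
    """
    prices = np.asarray(prices, dtype=float)
    timestamps = np.asarray(timestamps, dtype=float)

    log_returns = np.diff(np.log(prices))     # scale-free price changes
    dt = np.diff(timestamps)                  # irregular tick spacing as a feature
    # Rolling standard deviation of returns as a crude volatility proxy.
    vol = np.array([log_returns[max(0, i - window + 1): i + 1].std()
                    for i in range(len(log_returns))])

    return np.column_stack([log_returns, dt, vol])
```

The point is only that the model then sees stationarity-friendlier inputs (returns rather than prices, and the tick arrival rate made explicit) instead of raw ticks.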
Possibly the most fundamental problem I ran into is that it doesn't seem workable to use a scalar value to measure outcomes, so anything based on simple gradient descent is problematic. I think a practical outcome measurement must be at least three-dimensional -- it needs to include return, risk, and capital management. Arguably more, but the need for those three is easy to understand.
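As a sketch of what I mean by a multi-dimensional outcome, something like the following, where the three components and their definitions (total return, max drawdown as the risk proxy, peak capital utilisation) are purely illustrative choices:

```python
import numpy as np

def outcome_vector(equity_curve, position_sizes, capital):
    """Vector-valued outcome instead of a scalar loss:
    [total return, max drawdown (risk proxy), peak capital utilisation].
    Purely a sketch of the idea; the components are illustrative."""
    equity = np.asarray(equity_curve, dtype=float)

    total_return = equity[-1] / equity[0] - 1.0
    running_peak = np.maximum.accumulate(equity)
    max_drawdown = ((running_peak - equity) / running_peak).max()
    utilisation = np.max(np.abs(np.asarray(position_sizes, dtype=float))) / capital

    return np.array([total_return, max_drawdown, utilisation])
```

A scalar objective forces you to collapse those components into one number up front; keeping them separate at least makes the trade-off explicit, though it also rules out plain gradient descent on a single loss.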