A very astute professor of finance told our graduate finance class that the best way to become a bona fide quant is NOT to get a Ph.D. in Finance! It is better, he said, to get a Ph.D. in statistics, applied mathematics, or even physics. Why? Because a Ph.D. in Finance is generally not sufficiently quantitative. A quant needs a strong background in stochastic calculus.
“Quants for Hire?”
Our company has been described as a “quants for hire” firm. That is flattering. While we currently have four people with Master of Science degrees (and one close to finishing a master’s), what we do is probably more accurately described as “quant-like” or “quant-lite” software and services. Still, “Quants for Hire” definitely has a nice, succinct ring to it.
Quant-like Tangents to Financial Learning
Most of our quant-like work has been fairly vanilla: back-testing trading strategies in Excel, Monte Carlo simulations (also in Excel), factor analysis, and options strategy analysis. So far our clients like Excel and are not very interested in R. The main application of R has been to double-check our Excel back-tests!
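For a flavor of what such a double-check looks like, here is a minimal Monte Carlo sketch in R; every parameter below (drift, volatility, horizon) is an illustrative assumption for the example, not a client figure.

```r
# Minimal Monte Carlo sketch: simulate one-year portfolio returns.
# All parameters below are illustrative assumptions.
set.seed(42)
n_sims <- 10000                  # number of simulated years
n_days <- 252                    # trading days per year
mu     <- 0.07 / n_days          # assumed daily drift
sigma  <- 0.15 / sqrt(n_days)    # assumed daily volatility

# Each column is one simulated path of daily log returns
daily  <- matrix(rnorm(n_sims * n_days, mean = mu, sd = sigma),
                 nrow = n_days)
annual <- exp(colSums(daily)) - 1   # compound to annual simple returns

quantile(annual, c(0.05, 0.50, 0.95))  # rough downside/median/upside
```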
We have attracted fairly sophisticated clients. They seem reasonably comfortable talking about portfolios as unit vectors that can be linearly combined. They tend to understand correlation matrices and Sortino ratios, and in some cases even relate to partial derivatives and gradients. But they tend to push back on explanations involving geometric Brownian motion, Ito’s lemma, and the finer points of Black-Scholes-Merton. They do, however, appear to appreciate that we “know our stuff.”
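For reference, the Sortino ratio is simply mean excess return over downside deviation. Here is a minimal R sketch; the function name, the fake daily returns, and the zero minimum acceptable return (MAR) are my own illustrative assumptions.

```r
# Minimal Sortino ratio sketch. Assumes a vector of periodic returns
# and a minimum acceptable return (MAR); both are illustrative here.
sortino <- function(returns, mar = 0) {
  excess   <- returns - mar
  downside <- sqrt(mean(pmin(excess, 0)^2))  # downside deviation
  mean(excess) / downside
}

set.seed(1)
r <- rnorm(252, mean = 0.0004, sd = 0.01)  # fake daily returns
sortino(r)
```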
I’ve got a decent set of R skills, but I’m looking to take them to the next level. Taking a page from my professor, I have been tackling non-financial quantitative problems. My current problem du jour is image compression. I came up with an R script that achieves very high compression ratios for lossy compression. It is shorter than 200 lines with comments and shorter than 100 lines when stripped of comments and blank (formatting) lines.
It can easily achieve 20X or greater compression, albeit with a loss in quality. In my initial tests, my R algorithm (IC_DXB1.1) was somewhat comparable to JPEG (GIMP 2.8) at 20X compression, though the JPEG clearly looks better in general. I also found an elegant R compressor whose code is extremely compact… the kernel is about 5 lines! Let’s call this the SVD (singular value decomposition) approach for reference. So here are the bake-off results (all ~20X compressed to ~1.5KB):
[Images: JPEG output vs. IC_DXB1.1 output, each ~20X compressed]
What’s interesting to me is that each algorithm uses a radically different approach. JPEG uses the DCT (discrete cosine transform) plus a frequency “mask” or filter that reduces more and more high-frequency components to achieve compression. My IC_DXB1.1 algorithm uses a variant of B-splines. The SVD approach uses singular value decomposition from linear algebra.
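A kernel along the lines of that SVD approach really does fit in a handful of lines of R. Here is a sketch; the function name svd_compress, the rank k, and the random test matrix are my own illustrative choices, not the exact compressor I found.

```r
# SVD compression kernel: keep only the k largest singular values.
# 'img' is assumed to be a grayscale image as a numeric matrix.
svd_compress <- function(img, k) {
  s <- svd(img)
  # Rank-k reconstruction: U_k %*% diag(d_k) %*% t(V_k)
  s$u[, 1:k] %*% diag(s$d[1:k], nrow = k) %*% t(s$v[, 1:k])
}

# Example with a random "image"; a real test would load pixel data
img    <- matrix(runif(256 * 256), nrow = 256)
approx <- svd_compress(img, k = 20)
```

Storing the truncated factors takes k(m + n + 1) numbers versus mn for the raw m-by-n image, which is where the compression ratio comes from (before any entropy coding such as Huffman).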
Obviously tens of thousands of hours have been invested in JPEG encoding. And, unfortunately, 99%+ of JPEG images are not as compact as they could be due to a series of patent disputes around arithmetic coding. Even though the patents have all (to the best of my knowledge) expired, there is simply too much inertia behind the alternative Huffman coding at present. It is worth noting that my analysis of all three algorithms is based on Huffman coding for consistency. All three approaches could ultimately use either Huffman or arithmetic coding.
So this Image Stuff Relates to Finance How?
Another of my professors explained that, fundamentally, finance is about information. One set of financial interview questions starts with the premise that you have immediate (light-speed, real-time) access to all public information. Broadly, how would you make use of this information to make money trading? Alternatively, you are to assume (correctly) that information costs money… how would you prioritize your firm’s information access? How important are frequency and latency?
Having boatloads of real-time data and knowing what to do with it are two different things. I use R to back-test strategies because it is easy to write readable R code with a low bug rate. If I had to implement those strategies in a high-frequency trading environment, I would not use R; I would likely use C or C++. R is fast compared to Excel (maybe 5X faster), but slow compared to good C/C++ implementations (often 100X slower).
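As a flavor of that readability, here is a toy back-test sketch. The moving-average crossover rule, the 50/200-day windows, and the synthetic prices are all assumptions for illustration, not one of our client strategies.

```r
# Toy moving-average crossover back-test on synthetic prices.
# The 50/200-day rule and the fake data are illustrative only.
set.seed(7)
prices <- 100 * exp(cumsum(rnorm(1000, 0.0002, 0.01)))  # fake price series

ma   <- function(x, n) stats::filter(x, rep(1 / n, n), sides = 1)
fast <- ma(prices, 50)
slow <- ma(prices, 200)

# Hold the asset when the fast average is above the slow one;
# lag the signal one day so we trade on yesterday's information.
signal   <- c(NA, head(fast > slow, -1))
returns  <- c(NA, diff(log(prices)))
strategy <- ifelse(is.na(signal) | !signal, 0, returns)

cumret <- exp(cumsum(strategy)) - 1
tail(cumret, 1)  # total strategy return over the sample
```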
My thinking is that while knowledge is important, so is creativity. By dabbling in areas outside of my “realm of expertise”, I improve my knowledge while simultaneously exercising my creativity.
Both image compression and quant finance can reasonably be viewed as signal processing problems. Signal processing and information theory are closely related. So I would argue that developing skills in one area is cross-training for the other… with greater opportunity for developing creativity along the way. Finance is inextricably linked to information.
The Future of Finance Requires Disruptive (Software) Technology
It ain’t gonna be pretty for traditional financial advisors, hybrid advisors, broker/dealers, etc. Not with the rapid market acceptance of robo advisors.
Robo advising will have at least three important disruptive impacts:
- Accelerating downward pressure on advisory fees
- Taking market share and AUM
- Increasing market demand for investment tax management services such as tax-loss harvesting
Are you ready for the rise of the bots? We at Sigma1 are, and we are looking forward to it. That is because we believe we have the software and skills to make robo advisors work better. And we are not resting on our laurels — we are focusing our professional development on software, computer science, advanced mathematics, information theory, and the like.