Quant Cross-Training

A very astute professor of finance told our graduate finance class that the best way to become a bona fide quant is NOT to get a Ph.D. in Finance!  It is better, he said, to get a Ph.D. in statistics, applied mathematics, or even physics. Why? Because a Ph.D. in Finance is generally not sufficiently quantitative. A quant needs a strong background in stochastic calculus.

“Quants for Hire?”

Our company has been described as a “quants for hire” firm. That is flattering. While we currently have 4 folks with master of science degrees (and one close to finishing a master’s), what we do is probably more accurately described as “quant-like” or “quant-lite” software and services. However, “Quants for Hire” definitely has a nice succinct ring to it.

Quant-like Tangents to Financial Learning

Most of our quant-like work has been fairly vanilla — back testing trading strategies in Excel, Monte Carlo simulations (also in Excel), factor analysis, options strategy analysis. So far our clients like Excel and are not very interested in R. The main application of R has been to double-check our Excel back tests!

We have attracted fairly sophisticated clients.  They seem reasonably comfortable talking about viewing portfolios as unit vectors that can be linearly combined.  They tend to understand correlation matrices and Sortino ratios, and in some cases even relate to partial derivatives and gradients. But they tend to push back on explanations involving geometric Brownian motion, Ito’s lemma, and the finer points of Black-Scholes-Merton. They do, however, appear to appreciate that we “know our stuff.”

I’ve got a decent set of R skills, but I’m looking to take them to the next level. I’m taking a page from my professor’s book in tackling non-financial quantitative problems. My current problem du jour is image compression. I came up with an R script that achieves very high compression levels for lossy compression.  It is shorter than 200 lines with comments, and shorter than 100 lines when stripped of comments and blank (formatting) lines.

It can easily achieve 20X or greater compression, albeit with a loss in quality. In my initial tests my R algorithm (IC_DXB1.1) was somewhat comparable to JPEG (GIMP 2.8) at 20X compression, though the JPEG clearly looks better in general. I also found an elegant R compressor that is extremely compact R code… the kernel is about 5 lines! Let’s call this SVD (singular value decomposition) for reference. So here are the bake-off results (all ~20X compressed to ~1.5KB):

[Images, all ~20X compressed to ~1.5KB: JPEG 20X Compressed, IC_DXB1.1 20X Compressed, and SVD 20X Compressed (in R).]

What’s interesting to me is that each algorithm uses a radically different approach. JPEG uses DCT (discrete cosine transform) plus a frequency “mask” or filter that reduces more and more high-frequency components to achieve compression. My IC_DXB1.1 algorithm uses a variant of B-splines. The SVD approach uses singular value decomposition from linear algebra.
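
For the curious, the heart of the SVD approach is small enough to sketch here.  This is my own minimal reconstruction (not necessarily the code I found), assuming a greyscale image already loaded as a numeric matrix img with values in [0, 1]: keep only the k largest singular values and rebuild.

svd_compress <- function(img, k = 20) {
  s <- svd(img)                                               # img == s$u %*% diag(s$d) %*% t(s$v)
  approx <- s$u[, 1:k] %*% diag(s$d[1:k]) %*% t(s$v[, 1:k])   # rank-k reconstruction
  pmin(pmax(approx, 0), 1)                                    # clamp back to the valid pixel range
}

Storing only the k singular triplets (plus entropy coding on top; Huffman in my comparisons) is where the compression comes from.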

Obviously tens of thousands of hours have been invested in JPEG encoding. And, unfortunately, 99%+ of JPEG images are not as compact as they could be due to a series of patent disputes around arithmetic coding. Even though the patents have all (to the best of my knowledge) expired, there is simply too much inertia behind the alternative Huffman coding at present. It is worth noting that my analysis of all 3 algorithms is based on Huffman coding for consistency.  All three approaches could ultimately use either Huffman or arithmetic coding.

So this Image Stuff Relates to Finance How?

Another of my professors explained that, fundamentally, finance is about information. One set of financial interview questions starts with the premise that you have immediate (light-speed, real-time) access to all public information. Generally, how would you make use of this information to make money trading? Alternatively, you are to assume (correctly) that information costs money… how would you prioritize your firm’s information access?  How important are frequency and latency?

Having boat loads of real-time data and knowing what to do with it are two different things. I use R to back test strategies, because it is easy to write readable R code with a low bug rate. If I had to implement those strategies in a high-frequency trading environment, I would not use R; I would likely use C or C++. R is fast compared to Excel (maybe 5X faster), but is slow compared to good C/C++ implementations (often 100X slower).
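
As a flavor of why R back tests read so cleanly, here is a deliberately toy sketch (a hypothetical 200-day moving-average strategy, not a client strategy) that fits in a handful of lines:

backtest_sma <- function(prices, n = 200) {
  sma    <- stats::filter(prices, rep(1 / n, n), sides = 1)   # trailing simple moving average
  signal <- ifelse(prices > sma, 1, 0)                        # 1 = long, 0 = cash
  rets   <- c(0, diff(prices) / head(prices, -1))             # simple daily returns
  strat  <- c(0, head(signal, -1)) * rets                     # trade on the prior day's signal
  cumprod(1 + ifelse(is.na(strat), 0, strat))                 # equity curve
}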

My thinking is that while knowledge is important, so is creativity. By dabbling in areas outside of my “realm of expertise”, I improve my knowledge while simultaneously exercising my creativity.

Both image compression and quant finance can reasonably be viewed as signal-processing problems. Signal processing and information theory are closely related. So I would argue that developing skills in one area is cross-training for the other… and with greater opportunity for developing creativity. Finance is inextricably linked to information.

The Future of Finance Requires Disruptive (Software) Technology

It ain’t gonna be pretty for traditional financial advisors, hybrid advisors, broker/dealers, etc. Not with the rapid market acceptance of robo advisors.

Robo advising will have at least three important disruptive impacts:

  1. Accelerating downward pressure on advisory fees
  2. Taking market share and AUM
  3. Increasing market demand for investment tax management services such as tax-loss harvesting

Are you ready for the rise of the bots? We at Sigma1 are, and we are looking forward to it. That is because we believe we have the software and skills to make robo advisors work better. And we are not resting on our laurels — we are focusing our professional development on software, computer science, advanced mathematics, information theory, and the like.

The Equation Everyone in Finance Should Know (MV Optimization: How To, Part 2)

As the previous post shows, it all starts with…

In order to get close to bare-metal access to your compute hardware, use C.  In order to utilize powerful, tested convex optimization methods, use CVXGEN.  You can start with this CVXGEN code, but you’ll have to retool it (a quadprog-based R sketch of the full sweep appears after the list below)…

  • Discard the (m,m) matrix for an (n,n) matrix. I prefer to still call it V, but Sigma is fine too.  Just note that there is a major difference between Sigma (the variance-covariance matrix) and sigma (the individual asset-return variances; the diagonal of Sigma).
  • Go meta for the efficient frontier (EF).  We’re going to iteratively generate/call CVXGEN with multiple scripts. The differences will be w.r.t. the E(Rp).
  • Computing Max E(Rp) is easy, given α.  [I’d strongly recommend renaming this to something like expect_ret, consisting of (r1, r2, … rn). Alpha has too much overloaded meaning in finance.]
  • [Rmax] The first computation is simple.  Maximize E(Rp) s.t. constraints.  This is trivial and can be done w/o CVXGEN.
  • [Rmin] The first CVXGEN call is the simplest.  Minimize σp² s.t. constraints, ignoring E(Rp).
  • Using Rmin and Rmax, iteratively call CVXGEN q times (i=1 to q) with the additional constraint Rp_i = Rmin + (i/(q+1))*(Rmax-Rmin). This will produce q+2 portfolios on the EF [including Rmin and Rmax].  [Think of each step (1/(q+1))*(Rmax-Rmin) as a quantization of intermediate returns.]
  • Present, as you see fit, the following data…
    • (w0, w1, …wq+1)
    • [ E(Rp_0), …E(Rp_(q+1)) ]
    • [ σ(Rp_0), …σ(Rp_(q+1)) ]
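
As promised above, here is a hedged R sketch of the full sweep, using the quadprog package in place of CVXGEN (my own simplification: long-only, fully-invested constraints, with V and expect_ret as defined in the list above).

library(quadprog)

efficient_frontier <- function(V, expect_ret, q = 18) {
  n  <- length(expect_ret)                         # V must be positive definite
  A0 <- cbind(rep(1, n), diag(n))                  # sum(w) = 1 (equality), then w >= 0
  b0 <- c(1, rep(0, n))

  # [Rmin] minimum-variance portfolio, ignoring E(Rp)
  w_min <- solve.QP(V, rep(0, n), A0, b0, meq = 1)$solution
  r_min <- sum(w_min * expect_ret)
  r_max <- max(expect_ret)                         # [Rmax] all-in on the highest-E(R) asset

  # iteratively re-solve with the added equality E(Rp) = Rmin + (i/(q+1))*(Rmax - Rmin)
  targets  <- r_min + (1:q) / (q + 1) * (r_max - r_min)
  frontier <- lapply(targets, function(r_t) {
    A <- cbind(rep(1, n), expect_ret, diag(n))
    b <- c(1, r_t, rep(0, n))
    solve.QP(V, rep(0, n), A, b, meq = 2)$solution
  })
  c(list(w_min), frontier)                         # append the Rmax corner portfolio as desired
}

From the returned weight vectors, the [E(Rp_i)] and [σ(Rp_i)] columns follow directly.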

My point is that — in two short blog posts — I’ve hopefully shown how easily accessible advanced MVO portfolio optimization has become.  In essence, you can do it for “free”… and stop paying for simple MVO optimization… so long as you “roll your own” in house.

I do this for the following reasons:

  • To spread MVO to the “masses”
  • To highlight that if “anyone with a master’s in finance and a computer can do MVO for free,” firms should consider what their quantitative portfolio-optimization differentiation (AKA portfolio risk-management differentiation) really is, if any
  • To emphasize that this and the previous blog post will not greatly help with semi-variance portfolio optimization

I ask you to consider that you, as one of the few that read this blog, have a potential advantage.  You know who to contact for advanced, relatively-inexpensive SVO software. Will you use that advantage?

Clover Patterns Show How Portfolios Manage Risk

[Figure: Illustration of Classic Covariance (the red and green “clover” pattern).]

The red and green “clover” pattern illustrates how traditional risk can be modeled.  The red “leaves” are triggered when both the portfolio and the “other asset” move together in concert.  The green leaves are triggered when the portfolio and asset move in opposite directions.

Each event represents a moment in time, say the closing price for each asset (the portfolio or the new asset).  A common time period is 3 years of total-return data [37 months of price and dividend data reduced to 36 monthly returns].

Plain English

When a portfolio manager considers adding a new asset to an existing portfolio, she may wish to see how that asset’s returns would have interacted with the rest of the portfolio.  Would this new asset have made the portfolio more or less volatile?  Risk can be measured by looking at the time-series return data.  Each time the asset and the portfolio are in the red, risk is added. Each time they are in the green, risk is subtracted.  When all the reds and greens are summed up there is a “mathy” term for this sum: covariance.  “Variance” as in change, and “co” as in together. Covariance means the degree to which two items move together.

If there are mostly red events, the two assets move together most of the time.  Another way of saying this is that the assets are highly correlated. Again, that is “co” as in together and “related” as in relationship between their movements. If, however, the portfolio and asset move in opposite directions most of the time, the green areas, then the covariance is lower, and can even be negative.

Covariance Details

It is not only whether the two assets move together or apart; it is also the degree to which they move.  Larger movements in the red region result in larger covariance than smaller movements.  Similarly, larger movements in the green region reduce covariance.  In fact, it is the product of the movements that determines how much the covariance sum moves up or down.  Notice how the clover leaves converge to the center, (0,0), if either the asset or the portfolio doesn’t move at all.  This is because the product of zero times anything must be zero.
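
To make the “sum of products” concrete, here is a minimal R sketch of the sum the clover pattern represents (port_ret and asset_ret being hypothetical vectors of 36 monthly returns):

cov_by_hand <- function(port_ret, asset_ret) {
  dp <- port_ret  - mean(port_ret)    # portfolio movement relative to its mean
  da <- asset_ret - mean(asset_ret)   # asset movement relative to its mean
  sum(dp * da) / (length(dp) - 1)     # same-sign products are "red", opposite-sign are "green"
}                                     # equivalent to cov(port_ret, asset_ret)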

Getting Technical: The clover-leaf pattern relates to the angle between each pair of asset movements.  It does not show the effect of the magnitude of their positions.

If the incremental covariance of the asset to the portfolio is less than the variance of the portfolio, a portfolio that adds the asset would have had lower overall variance (historically).  Since there is a tendency (but no guarantee!) for assets’ correlations to remain somewhat similar over time, the portfolio manager might use the covariance analysis to decide whether or not to add the new asset to the portfolio.

Semi-Variance: Another Way to Measure Risk

[Figure: Semi-variance Visualization (the yellow-grey cloverleaf).]

After staring at the covariance visualization, something may strike you as odd — the fact that when the portfolio and the asset move UP together, the variance increases. Since variance is used as a measure of risk, that’s like calling positive returns a risk.

Most ordinary investors would not consider the two assets going up together to be a bad thing.  In general they would consider this to be a good thing.

So why do many (most?) risk measures use a risk model that resembles the red and green cloverleaf?  Two reasons: 1) it makes the math easier, and 2) history and inertia. Many (most?) textbooks today still define risk in terms of variance, or its related cousin, standard deviation.

There is an alternative risk measure: semi-variance. The multi-colored cloverleaf, which I will call the yellow-grey cloverleaf, is a visualization of how semi-variance is computed. The grey leaf indicates that events that occur in that quadrant are ignored (multiplied by zero).  Up to this point, most academics agree on how to measure semi-variance.

Variants on the Semi-Variance Theme

However, differences exist over how to weight the other three clover leaves.  It is well-known that for measuring covariance each leaf is weighted equally, with a weight of 1. When it comes to quantifying semi-covariance, methods and opinions differ. Some favor a (0, 0.5, 0.5, 1) weighting scheme, where the order is the weights for quadrants 1, 2, 3, and 4 respectively. [As a decoder ring: Q1 = grey leaf, Q2 = green leaf, Q3 = red leaf, Q4 = yellow leaf.]

Personally, I favor weights (0, 3, 2, -1) for the asset versus portfolio semi-covariance calculation.  For asset vs asset semi-covariance matrices, I favor a (0, 1, 2, 1) weighting.  Notice that in both cases my weighting scheme results in an average weight per quadrant of 1.0, just like for regular covariance calculations.
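
As an illustration only (not HAL0 production code), that quadrant weighting can be sketched in R like this, using the decoder ring above and my (0, 3, 2, -1) asset-versus-portfolio weights; the quadrant orientation (asset on the x-axis, portfolio on the y-axis) is my assumption here:

semicov_weighted <- function(asset_ret, port_ret, w = c(0, 3, 2, -1)) {
  da <- asset_ret - mean(asset_ret)
  dp <- port_ret  - mean(port_ret)
  quad <- ifelse(da >= 0 & dp >= 0, 1,        # Q1: both up (grey, weight 0)
          ifelse(da <  0 & dp >= 0, 2,        # Q2: asset down, portfolio up (green)
          ifelse(da <  0 & dp <  0, 3, 4)))   # Q3: both down (red); Q4: asset up, portfolio down (yellow)
  sum(w[quad] * da * dp) / (length(da) - 1)
}

Swapping in w = c(0, 1, 2, 1) gives the asset-versus-asset version, and w = c(1, 1, 1, 1) recovers ordinary covariance.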

Financial Industry Moving toward Semi-Variance (Gradually)

Semi-variance more closely resembles how ordinary investors view risk. Moreover it also mirrors a concept economists call “utility.” In general, losing $10,000 is more painful than gaining $10,000 is pleasurable. Additionally, losing $10,000 is more likely to adversely affect a person’s lifestyle than gaining $10,000 is to help improve it.  This is the concept of utility in a nutshell: losses and gains have an asymmetrical impact on investors. Losses have a bigger impact than gains of the same size.

Semi-variance optimization software is generally much more expensive than variance-based (MVO mean-variance optimization) software.  This creates an environment where larger investment companies are better equipped to afford and use semi-variance optimization for their investment portfolios.  This too is gradually changing as more competition enters the semi-variance optimization space.  My guestimate is that currently about 20% of professionally-managed U.S. portfolios (as measured by total assets under management, AUM) are using some form of semi-variance in their risk management process.  I predict that that percentage will exceed 50% by 2018.

Data Science: Shrinking Big Data into Meaningful Data

In this post I explain how less is more when it comes to using “big data.”  The best data is concise, meaningful, and actionable. It is both an art and a science to turn large, complex data sets into meaningful, useful information. Just like the later paintings of Monet capture the impression of beauty more effectively than a mere photograph, “small data” can help make sense of “big data.”

[Image: Claude Monet, London (the sun through the fog).]

There is beauty in simplicity, but capturing simplicity is not simple. A young child’s drawings are simple too, but they are very unlikely to capture light and mood the way Monet did.

Worry not. There will be finance and math, but I will save the math for last, in an attempt to retain the interest of non “mathy” readers.

The point of discussing impressionist painting is to show that reduction — taking things away — can be a powerful tool.  In fact, filtering out “noise” is both useful and difficult. A great artist can filter out the noise without losing the fidelity of the signal.  In this case, the “signal” is emotion and color and light as perceived by a master painter’s mind.

Applying Impressionism to Finance

Massive amounts of data are available to the financial professional. Two questions I have been asking at Sigma1 since the beginning are 1) How to use “Big Compute” to crunch that data into better portfolios? 2) How to represent that data to humans — both investment pros and lay folk whose money is being invested?  After considerable thought, brainstorming, listening, and learning, I think we are beginning to construct a preliminary picture of how to do that — literally.

[Figure: Relationships between Portfolio Assets.]

While not as beautiful as a Monet painting, the picture above is worth a thousand words (and likely many thousands of dollars over time) to me.  The assets above constitute all of the current non-CASH building blocks of my personal retirement portfolio.  While simple, the above image took considerable software development effort and literally millions of computations to generate [millions is very do-able with computers].

This simple-looking image conveys complex information in an easy-to-understand form. The four colors — red, green, blue, and purple — convey four asset types: fixed income, US stocks, international stocks, and convertible securities. The angle between any two asset lines conveys the relative correlation between the pair.  In portfolio construction larger angles are better.  Finally the length of the line represents the “effectiveness” with which each asset represents its “angular position” within the portfolio (in addition to other information).
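
The layout algorithm behind this picture is Sigma1’s own, so I won’t reproduce it here.  For readers who want a rough feel for the idea, though, here is a generic stand-in (classical multidimensional scaling, which is not the method I actually use) that turns a correlation matrix into a 2-D asset map whose angular spread loosely tracks decorrelation:

# Generic stand-in for an "asset map": classical MDS on a correlation-derived distance.
asset_map <- function(cor_mat, labels = colnames(cor_mat)) {
  if (is.null(labels)) labels <- seq_len(nrow(cor_mat))
  d  <- sqrt(2 * (1 - cor_mat))                  # a standard correlation-derived distance
  xy <- cmdscale(as.dist(d), k = 2)              # reduce to two dimensions
  plot(xy, type = "n", xlab = "", ylab = "", asp = 1)
  arrows(0, 0, xy[, 1], xy[, 2], length = 0.08)  # draw each asset as a line from the origin
  text(xy, labels = labels, pos = 3, cex = 0.8)
}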

With Powerful Data, First Comes Humility, Next Comes Insight

I have applied the same visualizations to other portfolios, and I see that, according to my software, many of the assets in professionally-managed portfolios exhibit superior “robustness” to my own.  As someone who takes pride in having a kick-ass portfolio, I found this information humbling, and it took some time to absorb from an ego standpoint.  But, having gotten over it, I now see potential.

I have seen portfolios that have a significantly wider angle than my current portfolio.  What does this mean to me?  It means I will begin looking for assets to augment my personal portfolio.  Before I do that, let me share some other insights. The plot combines covariance matrix data for the 16 assets in the portfolio, as well as semi-variance data for each asset.  Without getting too “mathy” yet, the data visualization software reduces 136 pieces of data down to 32 (excluding color). The covariance matrix and the semi-variance calculation are themselves also reducers, in that they combine 5 years of monthly total-return data — 976 data points — down to 120 unique covariance numbers and 16 semi-deviation numbers. Taking 976 down to 32 results in a compression ratio of 30.5:1.

Finally, as it currently stands, the visualization software and resulting plot say nothing about expected return.  The plot focuses solely on risk mitigation at the moment.  Naturally, I intend to change that.

Time for the Math and Finance — Consider Yourself Warned

I mentioned a 30.5:1 compression ratio. Just like music and other data, other information, including financial information, can be compressed.  However, only so much compression can be achieved in a lossless manner.  In audio compression, researchers have learned which portions of music and other audio can be “lost” without the listener noticing the difference.  There is a field of psychoacoustics around doing just that — modeling what the human ear (and brain) can hear, and what gets “masked” by various physiological factors.

Even more important than preserving fidelity is extracting meaning. One way of achieving that is by removing “noise.” The visualization software performs significant computation to maintain as much angular fidelity as possible. As it optimizes angles, it keeps track of total error vis-a-vis the covariance matrix. It also keeps track of each individual asset’s error (the reciprocal of fitness — fit versus lack of fit).

The real alchemy comes from the line-length computation.  It combines semi-variance data with various fitness factors to determine each asset line length.

Just like Mercator projections for maps incur unavoidable error when converting from a 3-D globe to a 2-D map, the portfolio asset visualizations introduce error as well.  If one thinks of just the correlation matrix and semi-variance data, each asset has a dimensionality of 8.5 (in the case of 16 assets).  Reducing from 8.5-D to 2-D is a complex process, and there are an infinite number of ways to perform such an operation!  The art and [data] science is to enhance the “signal” while stripping away the “noise.”

The ultimate goals of portfolio data visualization technology are:

1) Transform raw data into actionable insight

2) Preserve sufficient fidelity of relevant data such that the “map” can be used to reliably get to the desired “destination”

I believe that the first goal has been achieved.  I know what actions to take… trying various other securities to find those that can build a “higher-angle”, and arguably more robust, more resilient investment portfolio.

However, the jury is still out on the degree [no pun intended] to which goal #2 has or has not been achieved.  Does this simple 2-D map help portfolio builders reliably and consistently navigate the 8+ dimensional portfolio space?

What about 3-D Modelling and Visualization?

I started working with 2-D for one key reason — I can easily share 2-D images with readers and clients alike.  I want feedback on what people like and dislike about the visuals. What is easy to understand, what is not?  What is useful to them, and what isn’t?  Ironing out those details in 2-D is step 1.

Of course I am excited by 3-D. Most of the building blocks are in my head, and I can heavily leverage the 2-D algorithms.  I am, however, holding off for now. I am waiting for feedback from readers and clients alike.  I spend a lot of time immersed in the language of math, statistics, and finance.  This can create a communication gap that is best mitigated through discussion with other people with other perspectives.  I wish to focus on 2-D for a while to learn more about market needs.

That being said, it is hard to resist creating a 3-D portfolio asset visualizer. The geek in me is extremely curious about how much the error terms will reduce when given a third degree of freedom to work with.

The bottom line is: Please give me any feedback: positive, negative, technical, aesthetic, etc. This is just the start. I am extremely enthusiastic about where this journey will take me and my company.

Disclosure and Disclaimer

Securities mentioned in this post are holdings in my personal retirement accounts (e.g. 401K, IRA, Roth IRA) as of the day of initial publication of this post. The purpose of this post is to illustrate features of Sigma1 Financial software. This is NOT investment advice, and NOT a recommendation to buy, sell, or hold any securities. Please refer to the “Disclaimer” Tab of the main page of this site for further information.

Pursuing Alpha with Antivariance

A simple and marginally-effective strategy to reduce portfolio variance is to construct an asset correlation matrix, select assets with low (preferably negative) correlations, and build a portfolio from those low-correlation assets.  This basic strategy involves creating a set of assets whose cross-correlations (covariances) are minimized.

One reason this basic strategy is only somewhat effective is that a correlation matrix (or covariance matrix) only provides a partial picture of the chosen investment landscape.  Some fundamental limitations include non-normal distributions, skewness, and kurtosis, to name a few.  To most readers these are fancy words with varying degrees of meaning.

Personally, I often find the mathematics of the work I do seductive like a Siren’s song.  I endeavor to strike a balance between exploring tangential mathematical constructs, and keeping most of my math applied. One mental antidote to the Siren’s song of pure mathematics is to think more conceptually than mathematically by asking questions like:

What are the goals of portfolio optimization?  What elements of the investing landscape allow these goals to be achieved?

I then attempt to answer these questions with explanations that a person with a college degree, but without a mathematical background beyond algebra, could understand.  This approach lets me define the concept first, and develop the math later.  In essence I can temporarily free my mind of the slow, system 2 thinking generally required for math.

Recently, I came up with the concept of antivariance.  I’m sure others have had similar ideas, and a cursory web search reveals it as a professional poker player’s nickname.  I will lay out my concept of antivariance as it relates to portfolio theory in particular, and the broader concept in general.

By convention, one of the key objectives of modern portfolio theory is the reduction of portfolio return variance.  The mathematical idea is that by combining assets with correlations of less than 1.0, the portfolio’s return variance is less than the weighted sum of the individual assets’ variances.

Antivariance assumes that there are underlying patterns that explain why two or more assets should be somewhat less correlated (independent), but at times negatively correlated.  Consider the effects of major hurricanes like Andrew or Katrina.  Their effects were negative for insurance companies with large exposures, but were arguably positive for companies that manufactured and supplied building materials used in the subsequent rebuilds.  I mention Andrew because there was much more, and more rapid, rebuilding following Andrew than following Katrina.  The disparate groups of stocks of (regional) insurance versus construction companies can be considered to exhibit paired antivariance to devastating weather events.

Nassim Nicholas Taleb coined the term antifragile, because terms such as robust simply don’t convey the exact mental connections.  I am beginning to use the term antivariance because it conveys concepts not well captured by terms like “negatively correlated”, “less correlated”, “semi-independent”, etc.  In many respects antifragile systems should exhibit antivariance characteristics, and vice versa.

The concept of antivariance can be extended to related concepts such as anticovariance and anticorrelation.

Using HALO Portfolio Optimization Software

Setting up a basic HALO optimization requires a list of asset tickers, their min and max allocation constraints, and expected returns (listed in that order).  Also, at least one user-specified category designation per asset is required. Below is a short example:

SPY 7% 40% 9.11% Equities
PBP 0% 15% 8.53% Equities
USMV 5% 15% 9.05% Equities
VTI 5% 25% 9.35% Equities
IJH 5% 20% 9.55% Equities
VB 3% 15% 9.81% Equities
VEA 5% 12% 8.69% International
VEU 5% 20% 9.21% International
EEM 1% 11% 10.07% International
JNK 3% 15% 6.03% Bonds
BKLN 0% 8% 3.61% Bonds
AGG 1% 9% 2.32% Bonds

Generally, it is advisable to keep the sum of the individual asset minimums below 50%, and the sum of maximums above 200%. This provides the HALO optimizer the freedom to create a wide range of optimized portfolios with different risk/reward trade-offs.

The above example is a very basic configuration. In order for asset managers to specify asset-class constraints, it is necessary to tell the optimizer that the “string” is a user-defined category.  Currently this is done with a leading asterisk (*):

*Equities       25% 85%
*International  10% 30%
*Bonds          15% 45%

The above config specifies that Equities must comprise a minimum of 25% of the investment portfolio and a maximum of 85%.  As with the individual asset constraints, it is advised to provide reasonably wide latitude to the optimization algorithms to produce a diverse set of optimized portfolios.

By default, the HALO Optimizer will produce a set of portfolios optimized to:

1) minimize:
a) semi-variance, σd (the default)
b) –OR– annualized standard deviation of total return, σ

2) maximize expected return, E(R)

The default time series used for computing σ and σd is end-of-month total-return deltas for the previous 36 months.  (This requires 37 months of total-return data for each security.)  The time period can be customized to use, say, 60 months’ worth of data in the analysis.  HALO also supports using weekly closing data or even daily closing data — however I generally recommend using monthly data for a variety of reasons.  First, it speeds the computation.  Second, monthly data captures multi-day and multi-week trends, correlations, and specifically low-correlation asset optimization.  Third, monthly data is closer to the sampling period of a “typical” high-net-worth retail investor.  [That said, a case could be made for using quarterly data — which is also supported.]
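
In R terms, the default input reduction looks roughly like this sketch (adj_close being a hypothetical vector of dividend-adjusted month-end closing values):

monthly_returns <- function(adj_close) {
  stopifnot(length(adj_close) >= 37)
  x <- tail(adj_close, 37)          # the most recent 37 month-end values
  diff(x) / head(x, -1)             # 36 end-of-month total-return deltas
}
# annualized sigma: sd(monthly_returns(adj_close)) * sqrt(12)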

Frequently HALO clients want to model newer securities that do not have 37 months of historical data.  For example, min-volatility ETFs such as SPLV, USMV, and EEMV are popular ETFs that are less than 3 years old. The HALO software suite has utilities that can statistically back fill the missing data.  The configuration of the statistical back-fill process is beyond the scope of this blog post, however it is an important and popular HALO Optimization Suite capability that so far has been used by all of Sigma1’s clients and beta testers.

Occasionally, Sigma1 clients and beta testers have had in-house funds that do not externally report their price or total return data.  For in-house funds, HALO can read client-supplied total-return data.  Naturally, HALO can include stocks, bonds, commodities, futures, and other assets with historical data into the portfolio optimization mix.

Variance, Semivariance Convergence

In running various assets through portfolio-optimization software, I noticed that for an undiversified set of assets there can be wide differences between the portfolios with the highest Sharpe ratios and the portfolios with the highest Sortino ratios.  Further, if an efficient frontier of ten portfolios is constructed (based on mean-variance optimization) and sorted according to both Sharpe and Sortino ratios, the ordering is very different.

If, however, the same analysis is performed on a globally-diversified set of assets, the portfolios tend to converge.  The broad ribbon of the 3-D efficient surface seen with undiversified assets narrows until it begins to resemble a string arching smoothly through space.  The Sharpe/Sortino ordering becomes very similar, with ranks seldom differing by more than 1 or 2 positions.  Portfolios E and F may rank 2 and 3 in the Sharpe ranking but rank 2 and 1 in the Sortino ranking, for example.

Variance/Semivariance divergence is wider for optimized portfolios of individual stocks.  When sector-based stock ETFs are used instead of individual stocks, the divergence narrows.  When bond- and broad-based index ETFs are optimized, the divergence narrows to the point that it could be considered by many to be insignificant.

This simple convergence observation has interesting ramifications.  First, a first pass of faster variance optimization can be applied, followed by a slower semivariance-based refinement to more efficiently achieve a semivariance-optimized portfolio.  Second, semivariance distinctions can be very significant for non-ETF (stock-picking) and less-diversified portfolios.  Third, for globally-diversified, stock/bond, index-ETF-based portfolios, the differences between variance-optimized and semivariance-optimized portfolios are extremely subtle and minute.

A Choice: Perfectly Wrong or Imperfectly Right?

In many situations good quick action beats slow brilliant action.  This is especially true when the “best” answer arrives too late.  The perfect pass is irrelevant after the QB is sacked, just as the perfect diagnosis is useless after the patient is dead.  Let’s call this principle the temporal dominance threshold, or beat the buzzer.

Now imagine taking a multiple-choice test such as the SAT or GMAT.  Let’s say you got every question right, but somehow managed to skip question 7.  In the line for question #7 you put the answer to question #8, etc.  When you answer the last question, #50, you finally realize your mistake when you see one empty space left on the answer sheet… just as the proctor announces “Time’s up!”  Even though you’ve answered every question right (except for question #7), you fail dramatically.  I’ll call this principle query displacement, or right answer/wrong question.

The first scenario is similar to the problems of high-frequency trading (HFT).  Good trades executed swiftly are much better than “great” trades executed (or not executed!) after the market has already moved.   The second scenario is somewhat analogous to the problems of asset allocation and portfolio theory.  For example, if a poor or “incomplete” set of assets is supplied to any portfolio optimizer, results will be mediocre at best.  Just one example of right answer (portfolio optimization), wrong question (how to turn lead into gold).

I propose that the degree of fluctuation, or variance (or mean-return variance) is another partially-wrong question.  Perhaps incomplete is a better term.  Either way, not quite the right question.

Particularly if your portfolio is leveraged, what matters is portfolio semivariance.  If you believe that “markets can remain irrational longer than you can remain solvent”, leverage is involved.  Whether the leverage comes via margin or via derivatives matters not.  Leverage is leverage.  At market close, “basic” 4X leverage means complete liquidation at an underlying loss of only 25%.  Downside matters.

Supposing a long-only position with leverage, modified semivariance is of key importance.  Modified, in my notation, means using zero rather than μ.  For one, solvency does not care about μ: the mean return over an interval is irrelevant if insolvency occurs within that interval.
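
In R terms, the distinction is a one-liner (the exact normalization is my choice here, not a standard):

semivar_modified <- function(r) mean(pmin(r, 0)^2)            # threshold at zero, not mu
semivar_mean     <- function(r) mean(pmin(r - mean(r), 0)^2)  # conventional, mean-relative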

The question at hand is what is the best predictor of future semivariance — past variance or past semivariance?  These papers make the case for semivariance:  “Good Volatility, Bad Volatility: Signed Jumps and the Persistence of Volatility” and “Mean-Semivariance Optimization: A Heuristic Approach“.

At the extreme, semivariance is the most important factor for solvency… far more important than basic variance.  In terms of client risk-tolerance, actual semi-variance is arguably more important than variance — especially when financial utility is factored in.

Now, finally, to the crux of the issue.  It is far better to predict future semivariance than to predict future variance.  If it turns out that past (modified) semivariance is more predictive of future semivariance than past variance is, then I’d favor a near-optimal optimization of expected return versus semivariance over a perfectly-optimal optimization of expected return versus variance.

It turns out that directly optimizing semivariance is computationally many orders of magnitude more difficult than optimizing for variance.  It also turns out that Sigma1’s HAL0 software provides a near-optimal solution to the right question: least semivariance for a given expected return.

At the end of the day, at market close, I favor near-perfect semivariance optimization over “perfect” variance optimization.  Period.  Can your software do that?  Sigma1 financial software, code-named HAL0, can.  And that is just the beginning of what it can do.  HAL0 answers the right questions, with near-perfect precision.  And more precisely each day.

Financial Software Tech

In order to create software that is appealing to the enterprise market today, Sigma1 must create software for five years from now.   In this post I will answer the questions of why and how Sigma1 software intends to achieve this goal.

The goal of Sigma1 HAL0 software is to solve financial asset allocation problems quickly and efficiently.  HAL0 is portfolio-optimization software that makes use of a variety of proprietary algorithms.  HAL0’s algorithms solve difficult portfolio problems quickly on a single-core computer, and much more rapidly with multi-core systems.

Savvy enterprise software buyers want to buy software that runs well on today’s hardware, but will also run on future generations of compute hardware.  I cannot predict all the trends for future hardware advances, but I can predict one: more cores.  Cores per “socket” are increasing on a variety of architectures: Intel x86, AMD x86, ARM, Itanium, and IBM Power7 to name a few.  Even if this trend slows, as some predict, the “many cores” concept is here to stay and progress.

Simply put — Big Iron applications like portfolio-optimization and portfolio-risk management and modelling are archaic and virtually DOA if they cannot benefit from multi-core compute solutions.   This is why HAL0 is designed from day 1 to utilize multi-core (as well as multi-socket) computing hardware.  Multiprocessing is not a bolt-on retrofit, but an intrinsic part of HAL0 portfolio-optimization software.

That’s the why, now the how.  Google likes to use the phrase “map reduce” while others like the phrase embarrassingly parallel.  I like both terms because it can be embarrassing when a programmer discovers that the problems his software was slogging through in series were being solved in parallel by another programmer who mapped them to parallel sub-problems.

The “how” for HAL0’s core algorithm is multi-layered.   Some of these layers are trade secrets, I can disclose one.  Portfolio optimization involves creating an “efficient frontier” comprised of various portfolios along the frontier.  Each of these portfolios can be farmed out in parallel to evaluate its risk and reward values.   Depending on the parameters of a particular portfolio-optimization problem this first-order parallelism can provide roughly a 2-10x speedup — parallel, but not massively parallel.
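
To make that concrete (and only for that disclosed layer; this is an illustration, not HAL0 internals), here is roughly how farming out candidate portfolios could look in R with the parallel package:

library(parallel)

# Evaluate each candidate frontier portfolio on its own core (illustrative sketch).
evaluate_frontier <- function(portfolios, returns_matrix, cores = detectCores()) {
  mclapply(portfolios, function(w) {
    port_ret <- as.vector(returns_matrix %*% w)       # candidate portfolio's return series
    list(reward = mean(port_ret),                     # reward metric
         risk   = sqrt(mean(pmin(port_ret, 0)^2)))    # a downside-risk metric
  }, mc.cores = cores)
}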

HAL0 was developed under a paradigm I call CAP (congruent and parallel).  Congruent in this context means that given the same starting configuration, HAL0 will always produce the same result.  This is generally easy for single-threaded programs to accomplish, but often more difficult for programs running multiple threads on multiple cores.  Maintaining congruence is extremely helpful in debugging parallel software, and is thus very important to Sigma1 software.  [Coherent or Deterministic could be used in lieu of Congruent.]

As HAL0 development continued, I expanded the CAP acronym to CHIRP (Congruent, Heterogeneous, Intrinsically Recursively Parallel).  Not only does CHIRP have a more open, happier connotation than CAP, it adds two additional tenets: heterogeneity and recursion.

Heterogeneity, in the context of CHIRP, means being able to run, in parallel, on a variety of machines with different computing capabilities.  On one end of the spectrum, rather than requiring all machines in the cloud or compute queue to have the exact same specs (CPU frequency, amount of RAM, etc.), the machines can be different.  On the other end of the spectrum, heterogeneity means running in parallel on multiple machines with different architectures (say x86 and ARM, or x86 and GPGPUs).  This is not to say that HAL0 has complete heterogeneous support; it does not.  HAL0 is, however, architected with modest support for heterogeneous solutions and extensibility for future enhancements.

The recursive part of CHIRP is very important.  Recursively parallel means that the same code can be run (forked) to solve sub-problems in parallel, and those sub-problems can be divided into sub-sub-problems, etc.  This means that the same tuned, tight, and tested code can be leveraged in a massively parallel fashion.

By far the most performance-enhancing piece of HAL0 portfolio-optimization CHIRP is RP.  The RP optimizations are projected to produce speedups of 50 to 100X over single-threaded performance (in a compute environment with, for example, 20 servers with 10 cores each).  Moreover, the RP parts of HAL0 only require moderate bandwidth and are tolerant of relatively high latency (say, 100 ms).

Bottom line:  HAL0 portfolio-optimization software is designed to be scalable and massively parallel.

Portfolio-Optimization Software: A Financial Software Suite for Power Users

When I’m not coding Sigma1 financial software, I’m often away from the keyboard thinking about it.  Lately I’ve been thinking about how to subdivide the HAL0 portfolio-optimization code into autonomous pieces.  There are many reasons to do this, but I will start by focusing on just one:  power users.

For this blog post, I’m going to consider Jessica, a successful 32 year-old proprietary trader.  Jessica is responsible for managing just over $500 million in company assets.  She has access to in-house research, and can call on the company’s analysts, quants, and researchers as needed and available.  Jessica also has a dedicated $500,000 annual technology budget, and she’s in the process of deciding how to spend it most effectively. She is evaluating financial software from several vendors.

Jessica is my target market for B2B sales.

In my electrical engineering career I have been responsible for evaluating engineering software.  Often I would start my search with 3 to 5 software products.  Due to time constraints, I would quickly narrow my evaluation to just two products.   Key factors in the early software vetting process were 1) ease of integration into our existing infrastructure and 2) ease of turn-on.  After narrowing the search to two, the criteria switched to performance, performance, performance, and flexibility… essentially a bake-off between the two products.

Ease of use and integration was initially critical because I (or others on my team) needed a product we could test and evaluate in-house.  I refused to make a final selection or recommendation based on vendor-provided data.  We needed software that we could get up and running quickly… solving our problems on our systems.  We’d start with small problems to become familiar with the products and then ramp up to our most challenging problems to stress and evaluate the software.

Assuming Jessica (and others like her) follow a similar approach to software purchases, HAL0 portfolio software has to get through both phases of product evaluation.

HAL0 software is optimized to “solve” simultaneously for 3 goals, normally:

  1. A risk metric
  2. A total-return metric
  3. An “x-factor” metric (often an orthogonal risk metric)

While great for power users, the x-factor metric is also non-standard.  To help get through the “ease of start-up” phase of product evaluation, I will likely default the x-factor to off.

Once the preliminary vetting is complete and final evaluation begins in earnest, performance and flexibility will be put to the test.  If, for instance, Jessica’s team has access to a proprietary risk-analysis widget utilizing GPGPUs that speeds up risk computation 100X over CPUs, HAL0 software can be configured to support it.  Because HAL0 is increasingly modular and componentized, Jessica’s resident quants can plug in proprietary components using relatively simple APIs.  Should competing products lack this plug-in capability, HAL0 software will have a massive advantage out of the starting gate.

When it comes to flexibility, component-based design wins hands down.  Proprietary risk metrics can be readily plugged in to HAL0 optimization software.  Such risk models can replace the default semi-variance model, or be incorporated as an adjunct risk metric in “x-space.”  Users can seed portfolio optimization with existing real and/or hypothetical portfolios, and HAL0 software will explore alternatives using user-selected risk and performance metrics.
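
The actual HAL0 plug-in API is not something I will publish here, but the shape of such a component is easy to convey: any function that maps (weights, return history) to a single risk number can, in principle, slot in.  A hypothetical example in R, using a worst-1-year-loss metric:

# Hypothetical plug-in risk metric (illustrative only; not the actual HAL0 API).
worst_one_year_loss <- function(weights, monthly_returns) {
  port <- as.vector(monthly_returns %*% weights)        # portfolio monthly return series
  stopifnot(length(port) >= 12)
  growth <- c(1, cumprod(1 + port))                     # growth of $1, starting at 1
  roll12 <- tail(growth, -12) / head(growth, -12) - 1   # rolling 1-year returns
  -min(roll12)                                          # worst 1-year loss, as a positive number
}
# hypothetical hook: optimize_portfolio(assets, risk = semi_variance, x_factor = worst_one_year_loss)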

HAL0 Portfolio-Optimization Basic Features

Out-of-the-box, HAL0 software comes pre-loaded with semi-variance and variance risk metrics.  Historic or expected-return metrics are utilized for each security.  By default, the total return of any portfolio is computed as the weighted average of the expected (or historic) total returns of the securities in that portfolio.  Naturally, leveraged and short-position weightings are supported as desired.

These basic features are provided for convenience and ease of setup.  While robust and “battle-ready”, I do not consider them major value-add components.  The key value proposition for HAL0 portfolio-optimization is its efficient, multi-objective engine.  HAL0 software is like a race car delivered ready to compete with a best-in-class engine.  All the components are race-ready, but some world-class race teams will buy the car simply to acquire the engine and chassis, and retrofit the rest.

Because HAL0 software is designed from the ground up to efficiently optimize large-asset-count portfolios using 3 concurrent objectives, switching to conventional 2-D optimization is child’s play.  Part of the basic line up of “x-space” metrics includes:

  • Diversification as measured against root-mean-square deviation from market or benchmark in terms of sector-allocation, market-cap allocation, or a weighted hybrid of both.
  • Quarterly, 1-year, 3-year, or 5-year worst loss.
  • Semi-variance (SV) or variance (V) — meaning concurrent optimization for both V and SV is possible.

Don’t “Think Different”, Be Different!

My primary target audience is professional investors who simply refuse to run with the herd.  They don’t seek difference for its own sake, but because they wish to achieve more.  HAL0 is designed to help ease its own learning curve by enabling users to quickly achieve the portfolio-optimization equivalent of “Hello, World!”, while empowering the power user to configure, tweak, and augment to virtually any desired extreme.