For Posterity

I haven’t updated since sharing the news of Dave’s passing. Several friends of ours, fellow engineers and developers, tried to revive Dave’s dream with his portfolio software. Unfortunately, the task has proven insurmountable, and we have decided to step away from the project.

So, while the software will never see the light of day, I have decided to keep Dave’s blog up. The most obvious reason is that I don’t have the heart to take it down. Another is that I think many of his posts were intelligent and insightful, and I hope people will continue to find interest and meaning in them.

Thank you to everyone who supported his vision.

Moving Forward

For those of you who have been following the Sigma1 blog for the past few years, the sudden rebranding and name change may come as a surprise. A lot happened in 2016 to precipitate these changes.

Firstly, I am sad to inform you that the founder of Sigma1, Dave Balhiser, passed away in January of 2016. Secondly, I would like to introduce myself: I am Gabi Endress-Balhiser, Dave’s wife and partner. I have decided not to let the work my husband put into his portfolio optimization product go to waste, so I have partnered with some of his team, and we are working to continue developing the tool and bring it to market, hopefully sometime in 2017.

Admittedly it has been a difficult year. Dave’s brilliant mind and kind heart are missed by many. I watched him put hundreds of hours into developing this software. It was his passion and he truly believed it was revolutionary. The back-end of the product was rock solid before he passed, and the only missing element was a front-end to make it user-friendly. I have a background in usability and I have managed teams of developers in the past, so I am hoping I can help my husband’s dream come to fruition.

I am admittedly not a Quant by any stretch of the imagination, but if there is an interest I may invite guest bloggers to contribute to this site as we continue to move forward.

Thank you to everyone who supported my husband in his quest to build a revolutionary portfolio optimization software. I hope I will have your support along the way as well.

A Better Robo Advisory

Building a Better Robo Advisor

The more we learned about the current crop of robo advisory firms, the more we realized we could do better. This brief blog post hits the high points of that thinking.

Not Just the Same Robo Advisory Technology

It appears that all major robo advisory companies use 50+ year-old MPT (modern portfolio theory). At Sigma1 we use so-called post-modern portfolio theory (PMPT), which is much more current. At the heart of PMPT is optimizing return versus semivariance. The details are not important to most people, but the takeaway is that PMPT, in theory, allows greater downside-risk mitigation and does not penalize portfolios that have sharp upward jumps.

Robo advisors, we infer, must use some sort of Monte Carlo analysis to estimate “poor market condition” returns. We believe we have superior technology in this area too.

Finally, while most robo advisory firms offer tax-loss harvesting, we believe we can 1) set up portfolios that do it better, and 2) go beyond tax-loss harvesting to achieve greater overall portfolio tax efficiency.

I Robo: The Rise of the Robo Advisor

Think Ahead About Your Role in a Robo Advisory World

Financial innovation is here and it is here to stay.  Financial advisors, broker/dealers, hybrids, and even financial planners should be thinking about how to adapt to inevitable changes launched by disruptive investing technologies.

Robo Design — Chip designers have been using it for decades

I have a unique perspective on technological disruption.  For over ten years, my job was to develop software to make microchip designers more productive. Another way of describing my work: replacing microchip design tasks done by humans with software. In essence, my job was to put some chip designers out of work. My role was called (digital circuit) design automation, or DA.

In reality my work and the work of software design automation engineers like myself resulted in making designers faster and more productive — able to develop larger chips with roughly the same number of design engineers.

Robo Advisors: Infancy now, but growing very fast!

“The robos are coming, the robos are coming!” It’s true. Data through the end of 2014 shows that robo advisors managed $19 billion in assets, with 65% growth in just eight short months. That is essentially triple-digit annual growth: doubling every year.  $19 billion (likely $30 billion now) is just a drop in the bucket… but with firms like Vanguard and Schwab already developing and rolling out robo advising options of their own, these crazy growth rates are sustainable for a while.

With total US assets under management (AUM) exceeding $34 trillion, an estimated $30 billion for robo advisors represents less than 0.1% of managed assets.  If, however, robo advisors double their managed assets annually for the next five years, that amounts to about 3% of total AUM managed by robo advisors. If in the second five years the robo advisory annual growth rate slows to 50%, that still means robo advisors will control in the neighborhood of 20% of managed assets by 2025.
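The growth arithmetic behind these projections is easy to check. The sketch below makes the same simplifying assumption the paragraph does: total US AUM stays fixed at $34 trillion (in reality it will grow, so the shares are rough).

```python
TOTAL_AUM = 34e12          # $34 trillion in total US managed assets
robo = 30e9                # estimated robo-advisor AUM today

for _ in range(5):         # double annually for five years...
    robo *= 2.0
share_5yr = robo / TOTAL_AUM     # ~2.8%, i.e. "about 3%"

for _ in range(5):         # ...then grow 50% per year for five more
    robo *= 1.5
share_10yr = robo / TOTAL_AUM    # ~21%, "in the neighborhood of 20%"
```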

“Robo-Shields” and Robo Friends

Deborah Fox was clever enough to coin and trademark the term “robo-shield.” The basic idea is for traditional (human) investment advisors to protect their business by offering robo-like services, ranging from giving clients access to their online data to tax harvesting. I call this the half-robo defense.

Another route to explore is the “robo friends,” or “full robo-hybrid,” approach: partnering with an internal or external robo advisor.  The robo advisor is subservient to you, the investment advisor; it provides portfolio allocation and tax-loss harvesting while you focus on the client relationship.  I believe the “robo friends” model will win out over the pure robo advising model: most people prefer to have someone to call when they have investment questions or concerns, and they like having relationships with their human advisors. We shall see.

What matters most is staying abreast of the robo advisor revolution and having a plan for finding a place in the brave new world of robo advising.


Semivariance Excel Example

The most in-demand topic on this blog is an Excel semivariance example. I have posted mathematical semivariance formulas before, but now I am providing a description of exactly how to compute semivariance in “vanilla” Excel… no VBA required.

The starting point is column D. Cell D$2 contains the average of returns over the past 36 months, and the range D31:D66 contains those returns.  Thus the contents of D$2 are simply:

=AVERAGE(D31:D66)
This leads us to the full semivariance-based formula, the annualized semi-deviation:

{=SQRT(12)*SQRT(SUM(IF(D31:D66<D$2,(D31:D66-D$2)^2,0))/(COUNT(D31:D66)-1))}
We will now examine each building block of this formula, starting with:

IF(D31:D66<D$2,(D31:D66-D$2)^2,0)
We only want to measure “dips” below the mean return. For every observation that dips below the mean we take the square of the dip; otherwise we return zero. This is a vector operation: the IF function returns a vector of values.

Next we divide the resulting vector by the number of observations (months) minus 1. We can simply COUNT the number of observations with COUNT(D31:D66).  [NOTE 1: The minus 1 means we are taking the semivariance of a sample, not a population. NOTE 2: We could just as easily have taken the division “outside” the SUM; the result is the same either way.]

Next is the SUM. The following formula is the monthly semivariance of our returns in column D:

{=SUM(IF(D31:D66<D$2,(D31:D66-D$2)^2,0))/(COUNT(D31:D66)-1)}
You’ll notice the added curly braces around this formula. They specify that the formula should be treated as a vector (matrix) operation, and they allow the formula to stand alone.  Curly braces are applied to a vector (or matrix) formula by pressing <CTRL><SHIFT><ENTER> rather than just <ENTER>; this is required after every edit.

We now have monthly semivariance. If we wanted annual semivariance we could simply multiply by 12.

Often, however, we ultimately want annual semi-deviation (also called semi-standard deviation) for computing things like Sortino ratios. Going up one more layer in the call stack brings us to the SQRT operation, specifically:

{=SQRT(SUM(IF(D31:D66<D$2,(D31:D66-D$2)^2,0))/(COUNT(D31:D66)-1))}
This is monthly (downside) semi-deviation. We are just one step away from computing annual semi-deviation. That step is multiplying by SQRT(12), which brings us back to the big full formula.

There it is in a nutshell. You now have the formulas to compute semivariance and semi-deviation in Excel.
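For readers who prefer code to spreadsheets, the same calculation can be sketched in plain Python. The monthly returns below are made-up stand-ins for the Excel range D31:D66; everything else mirrors the formulas above (sample denominator n−1, annualization by 12).

```python
import math

# Hypothetical monthly returns (stand-ins for the Excel range D31:D66).
returns = [0.02, -0.01, 0.03, -0.02, 0.01, -0.03]

mean_r = sum(returns) / len(returns)              # Excel: =AVERAGE(D31:D66)

# Square only the "dips" below the mean; contribute zero otherwise.
dips_sq = [(r - mean_r) ** 2 if r < mean_r else 0.0 for r in returns]

# Sample semivariance: divide by n - 1, matching COUNT(D31:D66)-1.
semivariance = sum(dips_sq) / (len(returns) - 1)

annual_semivariance = semivariance * 12           # variance scales by 12
semideviation = math.sqrt(semivariance)           # monthly semi-deviation
annual_semideviation = math.sqrt(12) * semideviation
```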



Quant Cross-Training

A very astute professor of finance told our graduate finance class that the best way to become a bona fide quant is NOT to get a Ph.D. in finance!  It is better, he said, to get a Ph.D. in statistics, applied mathematics, or even physics. Why? Because a Ph.D. in finance is generally not sufficiently quantitative. A quant needs a strong background in stochastic calculus.

“Quants for Hire?”

Our company has been described as a “quants for hire” firm. That is flattering. While we currently have four people with Master of Science degrees (and one close to finishing a master’s), what we do is probably more accurately described as “quant-like” or “quant-lite” software and services. Still, “Quants for Hire” definitely has a nice, succinct ring to it.

Quant-like Tangents to Financial Learning

Most of our quant-like work has been fairly vanilla: back-testing trading strategies in Excel, Monte Carlo simulations (also in Excel), factor analysis, and options strategy analysis. So far our clients like Excel and are not very interested in R. Our main application of R has been to double-check our Excel back-tests!

We have attracted fairly sophisticated clients.  They seem reasonably comfortable talking about portfolios as unit vectors that can be linearly combined.  They tend to understand correlation matrices and Sortino ratios, and in some cases even relate to partial derivatives and gradients. But they tend to push back on explanations involving geometric Brownian motion, Ito’s lemma, and the finer points of Black-Scholes-Merton. They do, however, appear to appreciate that we “know our stuff.”

I’ve got a decent set of R skills, but I’m looking to take them to the next level. I’m taking a page from my professor’s book by tackling non-financial quantitative problems. My current problem du jour is image compression. I came up with an R script that achieves very high compression levels for lossy compression.  It is shorter than 200 lines with comments, and shorter than 100 lines when stripped of comments and blank (formatting) lines.

It can easily achieve 20X or greater compression, albeit with a loss in quality. In my initial tests my R algorithm (IC_DXB1.1) was somewhat comparable to JPEG (GIMP 2.8) at 20X compression, though the JPEG clearly looks better in general. I also found an elegant R compressor with extremely compact code… the kernel is about 5 lines! Let’s call it SVD (singular value decomposition) for reference. So here are the bake-off results (all ~20X compressed to ~1.5KB):

[Image: 20X compressed with JPEG]

[Image: 20X compressed with IC_DXB1.1]

[Image: 20X compressed with SVD in R]
What’s interesting to me is that each algorithm uses a radically different approach. JPEG uses the DCT (discrete cosine transform) plus a frequency “mask,” or filter, that reduces more and more high-frequency components to achieve compression. My IC_DXB1.1 algorithm uses a variant of B-splines. The SVD approach uses singular value decomposition from linear algebra.
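Of the three approaches, SVD is the easiest to show in a few lines. Here is a minimal numpy sketch of the idea (an illustration, not the 5-line R kernel mentioned above): keep only the k largest singular values and reconstruct.

```python
import numpy as np

def svd_compress(img, k):
    """Low-rank approximation: keep only the k largest singular values."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k, :]

# Storage drops from m*n values to k*(m + n + 1); for a 512x512 image,
# k = 13 gives roughly 20X compression: 512*512 / (13 * 1025) ~ 19.7.
rng = np.random.default_rng(0)
img = rng.random((64, 64))          # toy stand-in for a grayscale image
approx = svd_compress(img, 13)

# Reconstruction error shrinks as k grows.
errors = [np.linalg.norm(img - svd_compress(img, k)) for k in (1, 8, 32, 64)]
```

(In a real codec the retained factors would also be quantized and entropy-coded, e.g. with Huffman coding, to hit the stated file sizes.)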

Obviously tens of thousands of hours have been invested in JPEG encoding. And, unfortunately, 99%+ of JPEG images are not as compact as they could be due to a series of patent disputes around arithmetic coding. Even though the patents have all (to the best of my knowledge) expired, there is simply too much inertia behind the alternative Huffman coding at present. It is worth noting that my analysis of all 3 algorithms is based on Huffman coding for consistency.  All three approaches could ultimately use either Huffman or arithmetic coding.


So this Image Stuff Relates to Finance How?

Another of my professors explained that, fundamentally, finance is about information. One set of financial interview questions starts with the premise that you have immediate (light-speed, real-time) access to all public information. How, generally, would you make use of this information to make money trading? Alternatively, you are to assume (correctly) that information costs money: how would you prioritize your firm’s information access?  How important are frequency and latency?

Having boatloads of real-time data and knowing what to do with it are two different things. I use R to back-test strategies because it is easy to write readable R code with a low bug rate. If I had to implement those strategies in a high-frequency trading environment, I would not use R; I would likely use C or C++. R is fast compared to Excel (maybe 5X faster), but slow compared to good C/C++ implementations (often 100X slower).

My thinking is that while knowledge is important, so is creativity. By dabbling in areas outside of my “realm of expertise”, I improve my knowledge while simultaneously exercising my creativity.

Both image compression and quant finance can reasonably be viewed as signal processing problems, and signal processing and information theory are closely related. So I would argue that developing skills in one area is cross-training for the other, with greater opportunity for developing creativity. Finance is inextricably linked to information.

The Future of Finance Requires Disruptive (Software) Technology

It ain’t gonna be pretty for traditional financial advisors, hybrid advisors, broker/dealers, and the like. Not with the rapid market acceptance of robo advisors.

Robo advising will have at least three important disruptive impacts:

  1. Accelerating downward pressure on advisory fees
  2. Taking market share and AUM
  3. Increasing market demand for investment tax management services such as tax-loss harvesting

Are you ready for the rise of the bots? We at Sigma1 are, and we are looking forward to it. That is because we believe we have the software and skills to make robo advisors work better. And we are not resting on our laurels — we are focusing our professional development on software, computer science, advanced mathematics, information theory, and the like.

Dividends and Tax-Optimal Investing

The previous post showed after-tax results of a hypothetical 8% return portfolio. The primary weakness in this analysis was a missing bifurcation of return: dividends versus capital gains.

The analysis in this post adds the missing bifurcation. It is instructive to compare the two results. This new analysis accounts for qualified dividends and assumes that those dividends are reinvested. It is an easy mistake to assume that, because the qualified dividend rate is identical to the capital gains rate, dividends are equivalent to capital gains on a post-tax basis. This assumption is demonstrably false.

Tax Efficiency with dividends.

Though both scenarios model a net 8% annual pre-tax return, the “6+2” model (6% capital appreciation, 2% dividend) shows a lower 6.98% after-tax return for the most tax-efficient scenario versus a 7.20% after-tax return for the capital-appreciation-only model. (The “6+2” model assumes that all dividends are re-invested post-tax.)
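The gap between the two models can be reproduced with a rough simulation. This is a sketch under simplifying assumptions (a flat 15% rate on both qualified dividends and long-term gains, dividends taxed and reinvested each year, everything sold after 30 years); it reproduces the direction and approximate size of the gap, not the exact figures above.

```python
def after_tax_annualized(growth, div_yield, years=30, rate=0.15):
    """After-tax annualized return, taxing dividends yearly and gains at sale."""
    wealth, basis = 1.0, 1.0
    for _ in range(years):
        dividend = div_yield * wealth
        wealth = wealth * (1 + growth) + dividend * (1 - rate)  # reinvest net dividend
        basis += dividend * (1 - rate)       # reinvested dividends raise cost basis
    final = wealth - rate * max(wealth - basis, 0.0)            # LTCG tax at sale
    return final ** (1 / years) - 1

pure = after_tax_annualized(0.08, 0.00)      # capital appreciation only
mixed = after_tax_annualized(0.06, 0.02)     # the "6+2" model
# pure > mixed: taxing dividends along the way drags on compounding
```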

This insight suggests an interesting strategy to potentially boost total after-tax returns. We can assume that our “6+2” model represents the expected 30-year average returns for a total US stock market index ETF like VTI. We can deconstruct VTI into a value half and a growth half, then put the higher-dividend value half in a tax-sheltered account such as an IRA while leaving the lower-dividend growth half in a taxable account.

This value/growth split produces only about 3% more return over 30 years: an additional future value of $2,422 per $10,000 invested this way.

While this value/growth split works, I suspect most investors would not find it worth the extra effort. The analysis above assumes that the growth half follows a “7+1” model.  In reality the split costs about 4 extra basis points of expense ratio: VTI has a 5 bps expense ratio, while the growth and value ETFs all have 9 bps expense ratios. This cuts the roughly 10 bps per year after-tax boost to only 6 bps. Definitely not worth the hassle.

Now consider the Global X SuperDividend ETF (SDIV), which has a dividend yield of about 5.93%. Even if all of the dividends from this ETF receive qualified-dividend tax treatment, it is probably better to hold it in a tax-sheltered account. All things equal, it is better to hold higher-yielding assets in a tax-sheltered account when possible.

Perhaps more important is to hold the assets you are likely to trade frequently in a tax-sheltered account, and the assets you are less likely to trade in a taxable account. The trick, then, is to be highly disciplined about not selling taxable assets that have appreciated. (It is fine to sell taxable assets that have declined in value; that is tax-loss harvesting.)

The graph shows the benefits of long-term discipline on after-tax return, and the potential costs of a lack of trading discipline. Of course, this whole analysis changes if capital gains tax rates are increased in the future; one hopes to have sufficient advance notice to take “evasive” action.  It is also possible to be blindsided by tax-raising surprises that give no advance notice or are even retroactive! Unfortunately there are many forms of tax risk, including the very real possibility of future tax increases.