Inverted Risk/Return Curves

Over 50 years of academic financial thinking is based on a kind of financial gravity:  the notion that for a relatively diverse investment portfolio, higher risk translates into higher return given a sufficiently long time horizon.  Stated simply: “Risk equals reward.”  Stated less tersely, “Return for an optimized portfolio is proportional to portfolio risk.”

As I assimilated the CAPM doctrine in grad school, part of my brain rejected some CAPM concepts even as it embraced others.  I remember a graph of asset diversification showing that randomly selected portfolios exhibited steadily better risk/reward profiles as they grew to about 30 assets, after which further improvement was minuscule, only asymptotically approaching an “optimal” risk/reward level.  That resonated.

Conversely, strict CAPM thinking implied that a well-diversified portfolio of high-beta stocks would outperform a market-weighted portfolio of stocks over the long term, albeit in a zero-alpha fashion.  That concept met with cognitive dissonance.

Now, dear reader, for staying with this post this far, I will reward you with some hard-won insights.  After much compute-intensive risk/reward curve fitting, I found that the best-fit expected-return metric for assets was proportional to the square root of beta.  In my analyses I defined an asset’s beta using 36 months of monthly returns relative to a benchmark index.  Mostly, for US assets, my benchmark “index” was VTI total-return data.
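
For readers who want to replicate that beta definition, here is a minimal Ruby sketch.  The return arrays are made-up placeholders, and the covariance-over-variance formula is the standard regression beta, which the post itself does not spell out:

```ruby
# Minimal sketch: beta of an asset vs. a benchmark (e.g. VTI total return),
# using 36 monthly returns. The input arrays below are hypothetical placeholders.
def mean(xs)
  xs.sum.to_f / xs.size
end

def covariance(xs, ys)
  mx = mean(xs)
  my = mean(ys)
  xs.zip(ys).map { |x, y| (x - mx) * (y - my) }.sum / (xs.size - 1)
end

def beta(asset_returns, benchmark_returns)
  covariance(asset_returns, benchmark_returns) /
    covariance(benchmark_returns, benchmark_returns) # variance of the benchmark
end

# Example with made-up monthly returns (fractions, e.g. 0.02 == +2%):
asset     = Array.new(36) { rand(-0.10..0.12) }
benchmark = Array.new(36) { rand(-0.08..0.10) }
puts beta(asset, benchmark)
```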

Little did I know, at the time, that a brilliant financial maverick had been doing the heavy academic lifting around similar financial ideas.  His name is Bob Haugen. I only learned of the work of this kindred spirit upon his passing.

My academic number crunching on data since 1980 suggested a positive, but decreasing, incremental total return as volatility (or beta) increases.  Bob Haugen suggested a negative incremental total return for high-volatility assets above an inflection point of volatility.

Mr. Haugen’s lifetime of  published research dwarfs my to-date analyses. There is some consolation in the fact that I followed the data to conclusions that had more in common with Mr. Haugen’s than with the Academic Consensus.

An objective analysis of the investment approach of three investing greats will show that they have more in common with Mr. Haugen than with Mr. E.M. Hypothesis (aka Mr. Efficient Markets Hypothesis, not to be confused with “Mr. Market”).  Those great investors are 1) Benjamin Graham, 2) Warren Buffett, and 3) Peter Lynch.

CAPM suggests that, by combining the optimal risky portfolio with either risk-free lending or leveraged borrowing, a capital allocation line exists: tantamount to a linear risk/reward relationship.  This line is set by a unique tangent point on the efficient frontier curve of expected volatility versus expected return.

My research at Sigma1 suggests a modified curve with a tangent-point portfolio comprised, generally, of a greater proportion of low-volatility assets than CAPM would indicate.  In other words, my back-testing at Sigma1 Financial suggests that a different mix, favoring lower-volatility assets, is optimal.  The Sigma1 CAL (capital allocation line) is therefore different, because it is based on a different asset mix.  Nonetheless, the slope (first derivative) of the Sigma1 efficient frontier is always upward sloping.

Mr. Haugen’s research indicates that, in theory, the efficient frontier curve past a critical point begins sloping downward as portfolio volatility increases.  (Arguably the curve past the critical point ceases to be “efficient”, but from a parametric standpoint it can still be computed for academic or theoretical purposes.)  An inverted risk/return curve can exist, just as an inverted Treasury yield curve can exist.

Academia routinely deletes the dominated bottom of the parabola-like portion of the complete “efficient frontier” curve (resembling a parabola of the form x = A + B*y^2) for an allocation of two assets, commonly stocks (e.g. SPY) and bonds (e.g. AGG).

Maybe a more thorough explanation is called for.  In the two-asset model the complete “parabola” is a parametric equation where x = Vol(t*A, (1-t)*B) and y = ER(t*A, (1-t)*B), with t the weight of asset A.  [Vol = volatility or standard deviation; ER = expected return.]  The bottom part of the “parabola” is excluded because it has no potential utility to any rational investor.  In the multi-weight model, x = minVol(W) and y = maxER(W), where W is a vector of weights subject to the condition that its elements sum to 1.  In the multi-weight, multi-asset model the underside is automatically excluded.  However, there is no guarantee that there is no point where dy/dx is negative.  In fact, Bob Haugen’s research suggests that negative slopes (dy/dx) are possible, even likely, for many collections of assets.
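
To make the two-asset parametric curve concrete, here is a rough Ruby sketch that sweeps the weight t from 0 to 1 and prints (x, y) = (Vol, ER) for each mix.  The expected returns, volatilities, and correlation below are made-up numbers, not data from this post:

```ruby
# Rough sketch of the two-asset "parabola": sweep t from 0 to 1 and compute
# x = portfolio volatility, y = portfolio expected return.
# The asset statistics below are hypothetical, not taken from the post.
er_a,  er_b  = 0.08, 0.03   # expected returns (e.g. stocks vs. bonds)
vol_a, vol_b = 0.16, 0.05   # standard deviations
rho          = 0.10         # correlation between the two assets

(0..20).each do |i|
  t = i / 20.0
  y = t * er_a + (1 - t) * er_b
  variance = (t * vol_a)**2 + ((1 - t) * vol_b)**2 +
             2 * t * (1 - t) * rho * vol_a * vol_b
  x = Math.sqrt(variance)
  printf("t=%.2f  Vol=%.4f  ER=%.4f\n", t, x, y)
end
```

Plotting ER against Vol for these points traces the familiar bullet shape; the points below the minimum-volatility mix are the dominated underside discussed above.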

Time prevents me from following this financial rabbit hole to its end.  However I will point out the increasing popularity and short-run success of low-volatility ETFs such as SPLV, USMV, and EEMV.  I am invested in them, and so far am pleased with their high returns AND lower volatilities.

==============================================

NOTE: The part about W is oversimplified for flow of reading.  The bulkier explanation: y is stepped from ER(W) at the minimum-volatility portfolio up to the maximum expected return of any single asset (the portfolio in which that asset’s weight equals 1), and for each y the corresponding x = minVol(W) such that ER(W) = y and sum_of_weights(W) = 1.  Clear as mud, right?  That’s why I wrote it the other way first.

 

Beta Software, First Month

This marks the first month (30 days) of engagement with beta financial partners.  The goal is to test Sigma1 HAL0 portfolio-optimization software on real investment portfolios and get feedback from financial professionals.  The beta period is free.  Beta users provide tickers and expected-returns estimates via email, and Sigma1 provides portfolio results back with the best Sharpe, Sortino, or Sharpe/Sortino hybrid ratio results.

HAL0 portfolio-optimization software provides a set of optimized portfolios, often 40 to 100 “optimal” portfolios, optimized for expected return, return-variance and return-semivariance.   “Generic” portfolios containing a sufficiently-diverse set of ETFs produce similar-looking graphs.  A portfolio set containing SPY, VTI, BND, EFA, and BWX is sufficient to produce a prototypical graph.  The contour lines on the graph clearly show a tradeoff between semi-variance and variance.

 

[Figure: Portfolio optimization graph showing variance, semi-variance, and total return]

 

Once the set of optimized portfolios has been generated the user can select the “best” portfolio based on their selection criteria.

So far I have learned that many financial advisers and fund managers are aware of post-modern portfolio theory (PMPT) measures such as semivariance, but also a bit wary of them.  At the same time, some I have spoken with acknowledge that semivariance and parts of PMPT are the likely future of investing.  Portfolio managers want to be equipped for the day when one of their big investors asks, “What is the Sortino ratio of my portfolio? Can you reduce the semi-variance of my portfolio?”

I was surprised to hear that all of Sigma1’s beta partners are interested exclusively in a web-based interface.  This preliminary finding is encouraging because it aligns with a business model that protects Sigma1 IP from unsanctioned copying and reverse-engineering.

Another surprise has been the sizes of the asset sets supplied, ranging from 30 to 50 assets. Prior to software beta, I put significant effort into ensuring that HAL0 optimization could handle 500+ asset portfolios. My goal, which I achieved, was high-quality optimization of 500 assets in one hour and overnight deep-dive optimization (adding 8-10 basis points of additional expected-return for a given variance/semi-variance). On the portfolio assets provided to-date, deep-dive runtimes have all been under 5 minutes.

The beta-testing phase has provided me with a prioritized list of software improvements.  #1 is per-asset weighting limits.  #2 is an easy-to-use web interface.  #3 is focused optimization, such as the ability to set a maximum variance.  There have also been company-specific requests that I will strive to implement as time permits.

Financial professionals (financial advisers, wealth managers, fund managers, proprietary trade managers, risk managers, etc.) seem inclined to want to optimize and analyze risk in both old ways (mean-return variance) and new (historic worst-year loss, VaR measures, tail risk, portfolio stress tests, semivariance, etc.).

Some Sigma1 beta partners have been hesitant to provide proprietary risk measure algorithms.  These partners prefer to use built-in Sigma1 optimizations, receive the resulting portfolios, and perform their own in-house analysis of risk.  The downside of this is that I cannot optimize directly to proprietary risk measures.  The upside is that I can further refine the HAL0 algos to solve more universal portfolio-optimization problems.  Even indirect feedback is helpful.

Portfolio and fund managers are generally happy with mean-return variance optimization, but are concerned that semivariance-return measures are reasonably likely to change the financial industry in the coming years.  Luckily the Sharpe ratio and Sortino ratio differ only by their denominator (σp versus σd).  By normalizing the definitions of volatility (currently called modified-return variance and modified-return semivariance), HAL0 software optimizes simultaneously for both (modified) Sharpe and Sortino ratios, or any Sharpe/Sortino hybrid ratio in between.  A variance-focused investor can use a 100% variance-optimized portfolio.  An investor wanting to dabble with semi-variance can explore portfolios with, say, a 70%/30% Sharpe/Sortino ratio.  And an investor fairly bullish on semivariance minimization could use a 20%/80% Sharpe/Sortino hybrid ratio.
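
To make the hybrid-ratio idea concrete, here is a simplified Ruby sketch.  It shows one plausible way to blend the two denominators (σp and σd); it is not the HAL0 formula, and the return series is hypothetical:

```ruby
# Simplified sketch of a Sharpe/Sortino "hybrid" ratio.
# One plausible blend (a weighted denominator), NOT the HAL0 formula.
# The return series and risk-free rate below are hypothetical.
def mean(xs)
  xs.sum.to_f / xs.size
end

def stdev(xs)                       # sigma_p: standard deviation of all returns
  m = mean(xs)
  Math.sqrt(xs.map { |x| (x - m)**2 }.sum / xs.size)
end

def downside_dev(xs, mar = 0.0)     # sigma_d: deviation of returns below the MAR
  Math.sqrt(xs.map { |x| [x - mar, 0.0].min**2 }.sum / xs.size)
end

def hybrid_ratio(returns, rf: 0.0, sharpe_weight: 0.7)
  excess = mean(returns) - rf
  denom  = sharpe_weight * stdev(returns) +
           (1.0 - sharpe_weight) * downside_dev(returns)
  excess / denom
end

returns = [0.021, -0.013, 0.034, 0.008, -0.027, 0.016]  # made-up monthly returns
puts hybrid_ratio(returns, sharpe_weight: 1.0)   # pure (modified) Sharpe-style ratio
puts hybrid_ratio(returns, sharpe_weight: 0.7)   # 70%/30% Sharpe/Sortino blend
puts hybrid_ratio(returns, sharpe_weight: 0.2)   # 20%/80% blend
```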

I am very thankful to investment managers and other financial pros who are taking the time to explore the capabilities of HAL0 portfolio-optimization software.  I am hopeful that, over time, I can persuade some beta partners to become clients as HAL0 software evolves and improves.  In other cases I hope to provide Sigma1 partners with new ideas and perspectives on portfolio optimization and risk analysis.  Even in one short month, every partner has helped HAL0 software become better in a variety of ways.

Sigma1 is interested in taking on 1 or 2 additional investment professionals as beta partners.  If interested please submit a brief request for info on our contact page.

 

A Choice: Perfectly Wrong or Imperfectly Right?

In many situations good quick action beats slow brilliant action.  This is especially true when the “best” answer arrives too late.  The perfect pass is irrelevant after the QB is sacked, just as the perfect diagnosis is useless after the patient is dead.  Let’s call this principle the temporal dominance threshold, or beat the buzzer.

Now imagine taking a multiple-choice test such as the SAT or GMAT.  Let’s say you worked out the right answer to every question, but somehow managed to skip question 7.  On the line for question #7 you put the answer to question #8, and so on.  When you answer the last question, #50, you finally realize your mistake when you see one empty space left on the answer sheet… just as the proctor announces “Time’s up!”  Even though you solved every question correctly, everything from #7 onward landed on the wrong line, and you fail dramatically.  I’ll call this principle query displacement, or right answer/wrong question.

The first scenario is similar to the problems of high-frequency trading (HFT).  Good trades executed swiftly are much better than “great” trades executed (or not executed!) after the market has already moved.   The second scenario is somewhat analogous to the problems of asset allocation and portfolio theory.  For example, if a poor or “incomplete” set of assets is supplied to any portfolio optimizer, results will be mediocre at best.  Just one example of right answer (portfolio optimization), wrong question (how to turn lead into gold).

I propose that the degree of fluctuation, or variance (mean-return variance), is another partially-wrong question.  Perhaps incomplete is a better term.  Either way, it is not quite the right question.

Particularly if your portfolio is leveraged, what matters is portfolio semivariance.  If you believe that “markets can remain irrational longer than you can remain solvent”, then leverage is part of the picture; for a long-only portfolio, solvency is only at stake when leverage is involved.  Leverage via margin or leverage via derivatives matters not; leverage is leverage.  At market close, “basic” 4X leverage means complete liquidation at an underlying loss of only 25%, since a 25% drop times 4X leverage wipes out 100% of equity.  Downside matters.

Supposing a long-only position with leverage, modified semivariance is of key importance.  Modified, in my notation, means measuring downside relative to zero rather than to μ.  For one reason: solvency does not care about μ; a healthy mean return over a long interval is meaningless if insolvency arrives somewhere along the way.
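
Here is a minimal Ruby sketch of modified semivariance in that sense, with zero rather than μ as the threshold; the return series is a made-up placeholder:

```ruby
# Modified semivariance: average squared downside relative to ZERO,
# rather than relative to the mean return (mu). Returns are hypothetical.
def modified_semivariance(returns)
  downside = returns.map { |r| r < 0.0 ? r : 0.0 }
  downside.map { |r| r**2 }.sum / returns.size.to_f
end

returns = [0.03, -0.02, 0.01, -0.05, 0.04, -0.01]
puts modified_semivariance(returns)              # uses 0 as the threshold
puts Math.sqrt(modified_semivariance(returns))   # modified semideviation
```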

The question at hand is what is the best predictor of future semivariance — past variance or past semivariance?  These papers make the case for semivariance:  “Good Volatility, Bad Volatility: Signed Jumps and the Persistence of Volatility” and “Mean-Semivariance Optimization: A Heuristic Approach”.

At the extreme, semivariance is the most important factor for solvency… far more important than basic variance.  In terms of client risk tolerance, actual semi-variance is arguably more important than variance, especially when financial utility is factored in.

Now, finally, to the crux of the issue.  It is far better to predict future semivariance than to predict future variance.  If it turns out that past (modified) semivariance is more predictive of future semivariance than past variance is, then I’d favor a near-optimal optimization of expected return versus semivariance over a perfectly-optimal optimization of expected return versus variance.

It turns out that directly optimizing for semivariance is computationally many orders of magnitude more difficult than optimizing for variance.  It also turns out that Sigma1’s HAL0 software provides a near-optimal solution to the right question: least semivariance for a given expected return.

At the end of the day, at market close, I favor near-perfect semivariance optimization over “perfect” variance optimization.  Period.  Can your software do that?  Sigma1 financial software, code-named HAL0, can.  And that is just the beginning of what it can do.  HAL0 answers the right questions, with near-perfect precision.  And more precisely each day.


Portfolio-Optimization Software: A Financial Software Suite for Power Users

When I’m not coding Sigma1 financial software, I’m often away from the keyboard thinking about it.  Lately I’ve been thinking about how to subdivide the HAL0 portfolio-optimization code into autonomous pieces.  There are many reasons to do this, but I will start by focusing on just one:  power users.

For this blog post, I’m going to consider Jessica, a successful 32-year-old proprietary trader.  Jessica is responsible for managing just over $500 million in company assets.  She has access to in-house research, and can call on the company’s analysts, quants, and researchers as needed and available.  Jessica also has a dedicated $500,000 annual technology budget, and she’s in the process of deciding how to spend it most effectively.  She is evaluating financial software from several vendors.

Jessica is my target market for B2B sales.

In my electrical engineering career I have been responsible for evaluating engineering software.  Often I would start my search with 3 to 5 software products.  Due to time constraints, I would quickly narrow my evaluation to just two products.   Key factors in the early software vetting process were 1) ease of integration into our existing infrastructure and 2) ease of turn-on.  After narrowing the search to two, the criteria switched to performance, performance, performance, and flexibility… essentially a bake-off between the two products.

Ease of use and integration was initially critical because I (or others on my team) needed a product we could test and evaluate in-house.  I refused to make a final selection or recommendation based on vendor-provided data.  We needed software that we could get up and running quickly… solving our problems on our systems.  We’d start with small problems to become familiar with the products and then ramp up to our most challenging problems to stress and evaluate the software.

Assuming Jessica (and others like her) follow a similar approach to software purchases, HAL0 portfolio software has to get through both phases of product evaluation.

HAL0 software is optimized to “solve” simultaneously for 3 goals, normally:

  1. A risk metric
  2. A total-return metric
  3. An “x-factor” metric (often an orthogonal risk metric)

While great for power users, the x-factor metric is also non-standard.  To help get through the “ease of start up” phase of product evaluation, I will likely default the x-factor to off.

Once the preliminary vetting is complete and final evaluation begins in earnest, performance and flexibility will be put to the test.  If, for instance, Jessica’s team has access to a proprietary risk-analysis widget utilizing GPGPUs that speeds up risk computation 100X over CPUs, HAL0 software can be configured to support it.  Because HAL0 is increasingly modular and componentized, Jessica’s resident quants can plug in proprietary components using relatively simple APIs.  Should competing products lack this plug-in capability, HAL0 software will have a massive advantage out of the starting gate.

When it comes to flexibility, component-based design wins hands down.  Proprietary risk metrics can be readily plugged in to HAL0 optimization software.  Such risk models can replace the default semi-variance model, or be incorporated as an adjunct risk metric in “x-space.”  Users can seed portfolio optimization with existing real and/or hypothetical portfolios, and HAL0 software will explore alternatives using user-selected risk and performance metrics.
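
To illustrate the kind of plug-in shape I have in mind (the class and method names here are purely illustrative, not the actual HAL0 API), a proprietary risk component might only need to expose a single call that maps a vector of portfolio weights to a risk number:

```ruby
# Illustrative plug-in shape, NOT the actual HAL0 API: the optimizer only
# needs an object that maps a vector of portfolio weights to a risk number.
class ProprietaryRiskMetric
  # return_history: one array of periodic returns per asset
  def initialize(return_history)
    @returns = return_history
  end

  # The optimizer calls this with candidate weights; anything could sit behind it,
  # including a GPGPU-accelerated computation.
  def risk(weights)
    portfolio_returns = @returns.transpose.map do |period|
      period.zip(weights).map { |r, w| r * w }.sum
    end
    downside = portfolio_returns.map { |r| r < 0 ? r * r : 0.0 }
    Math.sqrt(downside.sum / portfolio_returns.size)
  end
end

metric = ProprietaryRiskMetric.new([
  [0.02, -0.01, 0.03],    # asset 1 returns (hypothetical)
  [0.01,  0.00, -0.02]    # asset 2 returns (hypothetical)
])
puts metric.risk([0.6, 0.4])
```

An optimizer that depends only on risk(weights) can swap such an object in for the default semi-variance model, or run it alongside as an “x-space” metric.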

HAL0 Portfolio-Optimization Basic Features

Out of the box, HAL0 software comes pre-loaded with semi-variance and variance risk metrics.  Historic or expected-return metrics are utilized for each security.  By default, the total return of any portfolio is computed as the weighted average of the expected (or historic) total returns of the securities in that portfolio.  Naturally, leveraged and short-position weightings are supported as desired.

These basic features are provided for convenience and ease of setup.  While robust and “battle-ready”, I do not consider them major value-add components.  The key value proposition for HAL0 portfolio-optimization is its efficient, multi-objective engine.  HAL0 software is like a race car delivered ready to compete with a best-in-class engine.  All the components are race-ready, but some world-class race teams will buy the car simply to acquire the engine and chassis, and retrofit the rest.

Because HAL0 software is designed from the ground up to efficiently optimize large-asset-count portfolios using 3 concurrent objectives, switching to conventional 2-D optimization is child’s play.  Part of the basic lineup of “x-space” metrics includes:

  • Diversification as measured against root-mean-square deviation from market or benchmark in terms of sector-allocation, market-cap allocation, or a weighted hybrid of both.
  • Quarterly, 1-year, 3-year, or 5-year worst loss.
  • Semi-variance (SV) or variance (V) — meaning concurrent optimization for both V and SV is possible.

Don’t “Think Different”, Be Different!

My primary target audience is professional investors who simply refuse to run with the herd.  They don’t seek difference for its own sake, but because they wish to achieve more.  HAL0 is designed to help ease its own learning curve by enabling users to quickly achieve the portfolio-optimization equivalent of “Hello, World!”, while empowering the power user to configure, tweak, and augment to virtually any desired extreme.

Benchmarking Financial Algorithms

In my last post I showed that there are far more than a googol permutations of a portfolio of 100 assets with (positive, non-zero) weights in increments of 10 basis points, or 0.1%.  That number can be expressed as C(999,99), or C(999,900), or 999!/(99!*900!), or ~6.385*10^138.  Out of sheer audacity, I will call this number Balhiser’s first constant (Kβ1).  [Wouldn’t it be ironic and embarrassing if my math was incorrect?]
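
For anyone who wants to check the arithmetic, Ruby’s arbitrary-precision integers make the count easy to compute exactly.  The sketch below counts compositions of 1,000 increments of 0.1% across 100 positive weights, which is C(999, 99):

```ruby
# Sanity check on the "more than a googol" claim: count portfolios of 100 assets
# with positive weights in 0.1% increments (compositions of 1000 into 100
# positive parts), which is C(999, 99) = C(999, 900).
def choose(n, k)
  (1..k).reduce(1) { |acc, i| acc * (n - k + i) / i }   # exact integer arithmetic
end

count = choose(999, 99)
puts count.to_s.size   # number of decimal digits
puts count             # the exact value, thanks to Ruby bignums
```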

In the spirit of Alan Turing’s 100th birthday today and David Hilbert’s 23 unsolved problems of 1900, I propose the creation of an initial set of financial problems to rate the general effectiveness of various portfolio-optimization algorithms.  These problems would all take a similar form, each having a search space of size Kβ1.  There would be 23 initial problems, P1…P23.  Each problem would supply a series of 37 monthly absolute returns for each of its securities.  Each security would also have an expected annualized 3-year return (some based on the historic 37-month returns, others independent).  The challenge for any algorithm A is to achieve the best average score on these problems.

I propose the following scoring measures:  1) S″(A) (S double prime), which simply finds the least average semi-variance portfolio, independent of expected return.  2) S′(A), which scores the best average semi-variance versus expected-return efficient frontier against a baseline frontier.  3) S(A), which scores the best average semi-variance, variance, and expected-return efficient-frontier surface against a baseline surface.  Any algorithm would be disqualified if any single test took longer than 10 minutes.  Similarly, any algorithm would be disqualified if it failed to produce a “sufficient solution density and breadth” for S′ and S″ on any test.  Obviously, a standard benchmark computer would be required.  Any OS, supporting software, etc. could be used for purposes of benchmarking.

The benchmark computer would likely be a well-equipped multi-core system such as a 32 GB Intel i7-3770 system.  There could be separate benchmarks for parallel computing, where the algorithm plus hardware is tested as a holistic system.

I propose these initial portfolio benchmarks for a variety of reasons:  1) similar standardized benchmarks have been very helpful in evaluating and improving algorithms in other fields such as electrical engineering;  2) they provide a standard that helps separate statistically significant results from anecdotal inference;  3) they illustrate both the challenge and the opportunity for financial algorithms to solve important investing problems;  4) they lower barriers to entry for financial-algorithm developers (and thus lower the cost of high-quality algorithms to financial businesses); and  5) I believe HAL0 can provide superior results.

Greener Software is Better Software

[Figure: CPU load correlates with power consumption]

Faster Software is Greener Software

Simply put, when one software product is more efficient than another, it runs faster and takes less time to solve the same problem.  The less time software takes to run, the less power is consumed.

By way of illustration, consider the efficiency of a steam ship going from New York to San Francisco before and after the Panama Canal was built.  The canal was a technological marvel of its time, and it cut the journey distance from 13,000 miles to 5,000.  It cut travel time by (more than) half, and reduced the journey’s coal consumption by roughly 50%.  The same work was performed, with the same “hardware” (the steamer), but in just 30 days rather than 60, and using half the fuel.

Faster run time is the most significant and most visible component of green software, but it is not the only significant factor.  Other factors affecting how much power software consumes include:

  • Cache miss rate
  • Streamlined versus bloated, crufty software
  • Use of best-suited hardware resources
  • Algorithm scalability

Without getting too technical, I’ll briefly touch on each bullet point.  A cache hit is when a CPU finds the information it needs in its internal cache memory, while a cache miss is when the CPU must send an off-chip request to the computer’s RAM to get the required data.  A cache miss is roughly 100x slower than a cache hit, in part because the data has to travel about 10 cm for a cache miss versus about 5 mm for a cache hit.  The difference in power consumption between a cache hit and a cache miss can easily be 20x to 100x, or more.

Most software starts out reasonably streamlined.  Later, if the software is popular, there comes a time when enhancement requests and bug reports arrive faster than developers can implement them in a streamlined manner.  Consequently many developers implement quick but inefficient fixes.  Often this behavior is encouraged by managers trying to hit aggressive schedule commitments.  The developers intend to come back and improve the code, but frequently their workload doesn’t permit it.  After a while the developers forget where the “kludges” or hacks are.  Even worse, the initial developers either get reassigned to other projects or leave for other jobs.  The new developers are challenged to learn the unfamiliar code and implement fixes and enhancements, adding their own cruft along the way.  This is how crufty, bloated software emerges:  overworked developers, a focus on schedule over software efficiency, and developer turnover.

Modern CPUs have specialized instructions and hardware for different compute operations.  One example is Intel SSE technology which features a variety of Single-Instruction, Multiple-Data (SIMD) extensions.  For example, SSE4 (and AVX) can add 4 or more pairs of numbers (2 4-number vectors) in one operation, rather than 4 separate ADD operations.  This reduces CPU instruction traffic and saves power and time.

Finally algorithm scalability is increasingly important to modern computing and compute efficiency.  Scalability has many meanings, but I will focus on the ability of software to use multiple compute resources in parallel.  [Also known as parallel computing.]   Unfortunately most software in use today has limited or no compute-resource scalability.  This means that this software can only use 1 core of a modern 4-core CPU.  In contrast, linearly-scalable software could run 3x faster by using 3 of the 4 cores at full speed.  Even better, it could run 3x faster on 4 cores running at 75% speed, and consume about 30% less power.  [I’ll spare you the math, but if you are curious this link will get you started.]

“Distributed Software” is Greener

Distributed computing is technology that allows compute jobs to be distributed into the “cloud” or data center queue.  Rather than having desktop workstations sitting idle much of the day, a data center is a room full of computers that direct compute jobs to the least busy computers.  Jobs can be directed to the computers best-suited to a particular compute request. Intelligent data centers can even put unused computers into “deep sleep” mode that uses very little power.

I use the term distributed software to mean software that is easily integrated with a job-submission or queuing software infrastructure.  [Short for distributed-computing-capable software.]  Clearly distributed software benefits directly from the efficiencies of a given data center.  Distributed software can also benefit from the ability to run in parallel on multiple machines.  The more tightly-coupled with the capabilities and status of the data center, the more efficiently distributed software can adapt to dynamic changes.

Sigma1 Software is Green

Sigma1 financial software (code-named HAL0) has been designed from the ground up to be lean and green.  First and foremost, HAL0 (code-named in honor of Arthur C. Clarke’s HAL 9000 — “H-A-L is derived from Heuristic ALgorithmic (computer)”) is architected to scale near-linearly to tens or hundreds of cores, “sockets”, or distributed machines.  Second, the central kernel or engine is designed to be as light-weight and streamlined as possible — helping to reduce expensive cache misses.  Third, HAL0 uses Heuristic Algorithms and other “AI” features to efficiently navigate astronomically-large search spaces (10^18 and higher).  Fourth, HAL0 uses an innovative computation cache system that allows repeated complex computations to be looked up in the cache, rather than recomputed.  In alpha testing, this feature alone accounted for a 3X run-time improvement.  Finally, HAL0 portfolio software incorporates a number of more modest run-time and power-saving features such as coding vector operations explicitly as vector operations, thus allowing easier use of SIMD and possibly GPGPU instructions and hardware.
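
The computation-cache idea is, at its heart, memoization keyed on the portfolio being evaluated.  Here is a toy Ruby sketch of the concept; the HAL0 internals are more involved, and the “expensive” metric below is just a stand-in:

```ruby
# Toy sketch of the computation-cache idea: memoize an expensive per-portfolio
# computation, keyed on the (rounded) weight vector. Not the HAL0 implementation.
class CachedRiskEvaluator
  def initialize(&expensive_risk_metric)
    @metric = expensive_risk_metric
    @cache  = {}
    @hits   = 0
  end

  def evaluate(weights)
    key = weights.map { |w| w.round(4) }   # weights quantized to 1 basis point
    if @cache.key?(key)
      @hits += 1
      @cache[key]
    else
      @cache[key] = @metric.call(weights)
    end
  end

  attr_reader :hits
end

# Stand-in "expensive" metric: sleep simulates a slow computation.
evaluator = CachedRiskEvaluator.new { |w| sleep 0.01; w.map { |x| x * x }.sum }
3.times { evaluator.evaluate([0.25, 0.25, 0.5]) }  # repeat evaluations hit the cache
puts evaluator.hits   # => 2
```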

Some financial planners still use Microsoft Excel to construct and optimize portfolios.  This is slow and inefficient — to say the least.  Other portfolio software I have read about is an improvement over Excel, but doesn’t mention scalability nor heuristic algorithms.  It is possible, perhaps likely, that other financial software with some the capabilities of HAL0 exists.  I suspect, however, that if it does, it is proprietary, in-house software that is not for sale.

A Plea for Better, Greener Software

In closing, I’d like the software community to consider how the efficiency (or inefficiency) of their current software products contributes to worldwide power consumption.  Computer hardware has made tremendous strides in performance-per-watt over the last ten years, and continues to do so.  IT and data-center technology is also becoming more power efficient.  Unfortunately, most software has been trending in the opposite direction: becoming more bloated and less efficient.  I urge software developers and software managers to consider the impact of the software they are developing.  I challenge you to consider, probably for the first time, how many kilowatt- or megawatt-hours your current software is likely to consume.  Then ask yourself, “How can I reduce that power?”

Toss your Financial Slide-rule: Beta Computation, MPT, and PMPT

Let me take you back to grad school for a few moments, or perhaps your college undergrad. If you’ve studied much finance, you’ve surely studied beta in the context of modern portfolio theory (MPT) and the Capital-Asset Pricing Model (CAPM). If you are a quant like me, you may have been impressed with the elegance of the theory. A theory that explains the value and risk of a security, not in isolation, but in the context of markets and portfolios.

Markowitz’s MPT book, published in the late ’50s, must have come as a clarion call to some investment managers.  Published ten years prior, Benjamin Graham’s The Intelligent Investor was, perhaps, the most definitive investing book of its time.  Graham’s book described an intelligent portfolio as a roughly 50/50 stock/bond mix, where each stock or bond had been selected to provide a “margin of safety”.  Graham provided a value-oriented model for security analysis; Markowitz provided the tools for portfolio analysis.  The concept of β that grew out of Markowitz’s framework added another dimension to security analysis.

As I explore new frontiers of portfolio modeling and optimization, I like to occasionally survey the history of the evolving landscape of finance.  My survey led me to put together a spreadsheet to compute β.  Here is the beta-computation spreadsheet.  The Excel spreadsheet uses three different methods to compute β, and they produce nearly identical results.  I used 3 years of weekly adjusted closing-price data for the computations.  R² and α (alpha) are also computed.  The “nearly” part of “identical” gives me a bit of pause: is it simply round-off, or are there errors?  Please let me know if you see any.

An ancient saying goes “Seek not to follow in the footsteps of men of old; seek what they sought.”   The path of “modern” portfolio theory leaves behind many footprints, including β and R-squared.  Today, the computation of these numbers is a simple academic exercise.  The fact that these numbers represent closed-form solutions (CFS) to some important financial questions has an almost irresistible appeal to many quantitative analysts and finance academics.   CFS were just the steps along the path;  the goal was building better portfolios.

Markowitz’s tools were mathematics, pencils, paper, a slide rule, and books of financial data.  The first handheld digital calculator wasn’t invented until 1967.  As someone quipped, “It’s not like he had a Dell computer on his desk.”  He used the mathematical tools of statistics developed more than 30 years prior to his birth.  A consequence of his environment is Markowitz’s (primary) definition of risk: mean variance.  When first learning about mean-variance optimization (MVO), almost every astute learner eventually asks the perplexing question, “So upside ‘risk’ counts the same as the risk of loss?”  In MPT, the answer is a resounding “Yes!”

The current year is 2012, and most sophisticated investors are still using tools developed during the slide-rule era.  The reason the MVO approach to risk feels wrong is because it simply doesn’t match the way clients and investors define risk.  Rather than adapt to the clients’ view of risk, most investment advisers, ratings agencies, and money managers ask the client to fill out a “risk tolerance” questionnaire that tries to map investor risk models into a handful of MV boxes.

MPT has been tweaked and incrementally improved by researchers like Sharpe and Fama and French — to name a few.  But the mathematically convenient MV definition of risk has lingered like a baseball pitcher’s nagging shoulder injury.  Even if this metaphorical “injury” is not career-ending, it can be career-limiting.

There is a better way, though it has a clunky name:  Post-Modern Portfolio Theory (PMPT).  [Clearly most quants and financial researchers are not good marketers… how about Next-Gen Portfolio Optimization instead?]  The heart of PMPT can be summed up as minimizing downside risk, as measured by the standard deviation of negative returns.  There is a good overview of PMPT in this Journal of Financial Planning article.  This quote from that article stands out brilliantly:

Markowitz himself said that “downside semi-variance” would build better portfolios than standard deviation. But as Sharpe notes, “in light of the formidable computational problems…he bases his analysis on the variance and standard deviation.”

“Formidable computational problems” of 1959 are much less formidable today.  Financial companies are replete with processing power, data storage, and computer networks.  In some cases developing efficient software to use certain PMPT concepts is easy; in other cases it can be extremely challenging.  (Please note the emphasis on the word ‘efficient’.  A financial algorithm that takes months to complete is unlikely to be of any practical use.)  The example Excel spreadsheet could easily be modified to compute a PMPT-inspired beta.  [Hint:  =IF(C4>0, 0, C4)]
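
Translated into the Ruby used for the HAL0 prototype, one PMPT-inspired variant of β mirrors that spreadsheet hint: clip positive returns to zero, then run the usual covariance-over-variance calculation.  Whether to clip the asset series, the benchmark series, or both is a modeling choice; this is an illustration, not a canonical definition:

```ruby
# PMPT-inspired "downside" beta, mirroring the =IF(C4>0, 0, C4) hint:
# clip positive returns to zero, then compute beta as usual.
# Clipping both series here is one choice among several; the returns are hypothetical.
def mean(xs)
  xs.sum.to_f / xs.size
end

def covariance(xs, ys)
  mx, my = mean(xs), mean(ys)
  xs.zip(ys).map { |x, y| (x - mx) * (y - my) }.sum / (xs.size - 1)
end

def downside_beta(asset, benchmark)
  clipped_asset     = asset.map     { |r| r > 0 ? 0.0 : r }
  clipped_benchmark = benchmark.map { |r| r > 0 ? 0.0 : r }
  covariance(clipped_asset, clipped_benchmark) /
    covariance(clipped_benchmark, clipped_benchmark)
end

asset     = [0.012, -0.034, 0.008, -0.011, 0.027, -0.019]
benchmark = [0.010, -0.025, 0.005, -0.008, 0.020, -0.015]
puts downside_beta(asset, benchmark)
```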

Are you ready to step off the beaten path constructed 50 years ago by wise men with archaic tools?  To step onto the hidden path they might have blazed, had they been armed with powerful computer technology?  Click the link to start your journey on the one less traveled by.

The Business of Financial Business

Personally, I find software development to be the easiest part of the financial software business.  I have been involved with sales before and feel reasonably confident about that aspect of the business.  The primary challenge for me is marketing.

Sales is a face-to-face process.  Software development is either a solo process or a collaborative process usually involving a small group of developers.  Marketing is very different.  It is a one-to-many (or few-to-many) situation.  Striking a chord with the “many” is a perpetual challenge because the feedback is indirect and slow.  With marketing, I miss the face-to-face feedback and real-time personal interaction.

Knowing that marketing is not my strongest point, I have put extra effort into SEO, SEM, social media, and web marketing.  Over the past couple weeks I have purchased about 20 new domains.  Market and entrepreneurial research has shown me that a good idea, a good product, and a good domain name are not sufficient to achieve my business goals.  I realize that solid branding and trademarks are also important.

As a holder of 4 U.S. patents, I understand the importance of IP protection.  However, I am ideologically opposed to patents on software, algorithms, and “business processes.”   Therefore I feel that I must focus on branding, trademark protection, trade-secret protection, and copyright protection.

My redoubled marketing efforts have been exhausting and I hope they will pay off.  Next I plan to get back to software creation and refinement.

Portfolio Software: Day 8

Software development seems to inevitably take longer than scheduled.  I thought I’d have a working alpha model by “Day 4”, but it took me until “Day 7”.  Happily, yesterday my program produced its first algorithmically-generated portfolios.  These portfolios were generated from a small “universe” of stocks optimized using simple heuristics.  To test my new algorithm, I designed the two extremes of the search-space to have known solutions.  The solutions between the extremes along the test efficient frontier, however, have no obvious closed-form solutions.  One of the two trial portfolio heuristics is, by design, extremely non-linear as well as non-monotonic.

So far, on relatively small data sets, the run time is very good.  This is despite the program being coded in an interpreted language, with code that contains several known inefficiencies (such as recomputing the same heuristic values repeatedly for the same portfolio… caching will solve this particular issue).  I am now well positioned to begin refining the algorithms’ parameters and heuristics, as well as to make run-time improvements.

I have taken care, and extra time, to build extensibility and testability (and of course revision control) into my Linux-based software development environment.  For example, the portfolio software supports n dimensions of analysis heuristics, not just 2 or 3.  Additionally, the security-selection space has no built-in limits; selecting from all listed, investable securities available on Earth is possible.  So long as the portfolio population is constrained (to, say, <= 100), the investable-securities list can be very large.  Similarly, portfolios can contain many securities (1000+) without significant slow-down.

Regression testing can be a bit of a challenge with rand() being part of the algorithm.  However, srand() is very, very helpful in creating targeted software regression tests.  So far, I’ve been able to maintain regression-based testability for the entire program.
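
A concrete example of the srand() point, as a small self-contained Ruby script; the randomized “optimizer pass” below is just a stand-in:

```ruby
# srand makes randomized algorithm runs reproducible for regression testing.
srand(20120618)                     # fixed seed
run_a = Array.new(5) { rand }       # stand-in for a randomized optimizer pass

srand(20120618)                     # same seed => identical rand() sequence
run_b = Array.new(5) { rand }

raise "regression mismatch" unless run_a == run_b
puts "deterministic: #{run_a.inspect}"
```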

I also set aside some SEO, SEM, social media time on this project.  While the SEO and SEM efforts are very tedious, they are critical to building market awareness.  The social media aspect is somewhat more fun, and occasionally pays dividends that extend beyond the potential marketing benefits.

All in all I am relatively happy with the progress to date.  Sigma1 now has working, readable, extensible code for portfolio optimization.  The current software is pre-alpha, and very likely to have undiscovered bugs and numerous opportunities for efficiency and rate-of-convergence improvement.  At least I believe the code has arrived at the initial proof of concept stage.  This is only the “end of the beginning”.   Much work remains to improve the golden (Ruby) version of the software.  Once the Ruby code has sufficiently “gelled”, then begins the task of duplicating it with a C/C++ version.  I intend to refine both the Ruby and C/C++ versions so that they produce identical results in regression.   This will be a tedious process, but is extremely likely to find and squash subtle bugs.

Portfolio Software Development: Day 3

Portfolio Software: Plain English

Yesterday I wrote an early version of financial software to help users improve their investing portfolios.  This software has the ability to solve financial problems in a very different way than is taught in graduate-level finance classes.  Rather than relying solely on a type of mathematics called statistical analysis, Sigma1 software uses techniques from computer science called artificial intelligence, or AI.  (I prefer the term machine intelligence because there is nothing artificial about the results produced by a solid AI algorithm.  If you doubt this, I challenge you to beat Chessmaster 11 running on your PC… on max difficulty.)

My idea has been to develop a sophisticated program that allows institutional investors such as fund managers to “plug in” their proprietary valuation models and come up with solid portfolios in minutes or hours, rather than the days or weeks required by brute-force techniques.  As I was working, I realized that smaller “normal” investors could also benefit from a simplified version of Sigma1 software.

Rather than sell this lite version of portfolio-opt software I may provide a free version on a website.  The free version would have limitations on both the number of securities and the “depth” of analysis and reporting.  For example the user may only be able to enter a maximum of 20 securities in their current (or proposed) starting portfolio.  The free web version would quickly suggest an asset-allocation mix of those securities that is (potentially) safer with the same expected return or (potentially) equally safe with a higher expected return.

If the free web version is popular enough, Sigma1 may introduce a paid web subscription service that allows a larger portfolio, a wider selection of securities, more detailed reports and even sample portfolios to “blend” with the investor’s favorite tickers.

Even after the free web version is released, I plan to refine the advanced institutional version of the software.  I plan to use it to improve the composition of the Sigma1 proprietary trading fund.  I also intend to develop a world-class product that institutional investors will want to have access to… for a very reasonable price.

At this time I have zero interest in sharing the source code or the specific concepts underlying the current and future Sigma1 software.  Many of these ideas stem from work in my undergrad engineering and computer science studies, and they developed further during my graduate work in finance and engineering.  The realization that the techniques I have developed for engineering, game theory, poker, and number theory apply most directly to portfolio construction and optimization hints at the possibility that I have hit upon one of those rare ideas that strike gold.  Not academic gold; real “gold” with real financial value.

I love academic research and open-source software.  I don’t intend to keep the concepts and code that Sigma1 is developing locked up forever.  If the Sigma1 financial software is financially successful enough, I hope to release pieces of it to the open-source community over time.  (Conversely, if the software does  not ultimately find a lucrative market, I will eventually release it too 🙂 )

Portfolio Optimization Software: Tech Speak

Yesterday I wrote the key pieces of an algorithm to build and optimize securities portfolios.  The remaining pieces, heuristics and selection, should be relatively easy to code.  The coding and testing went very quickly: 1) because I’ve written similar optimizers many times before, 2) because I had 2 days to think about it as I was driving, and 3) because I wrote it in Ruby.

Based on previous experience (and depending on the complexity of the heuristics), run-times should be swift for portfolios of 500 securities or less. In previous research I’ve been able to use distributed computing when the heuristics/analysis dominated run-time.  Generally the optimizer has not been the limiting factor for speed.

I plan to start with relatively simple heuristics to test the portfolio-optimization software.  Likely the first test will merely compute the (near-optimal) efficient frontier for a basket of securities, plotting the 3-year standard deviation of various portfolios on the frontier versus their expected returns.  If I wish, I may even compare the results to efficient frontiers constructed with classic methods using covariance matrices.
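
For reference, the classic covariance-matrix method boils down to σp = sqrt(wᵀΣw).  Here is a small Ruby sketch using the standard library’s Matrix class, with a made-up covariance matrix and weights:

```ruby
# Classic portfolio volatility from a covariance matrix: sigma_p = sqrt(w' * Sigma * w).
# The covariance matrix and weights below are hypothetical.
require 'matrix'

cov = Matrix[[0.0400, 0.0060, 0.0020],
             [0.0060, 0.0100, 0.0015],
             [0.0020, 0.0015, 0.0025]]   # annualized covariances for 3 assets

w = Vector[0.5, 0.3, 0.2]                # portfolio weights summing to 1

variance = w.inner_product(cov * w)
puts Math.sqrt(variance)                 # portfolio standard deviation
```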

Once I create a Ruby prototype I plan to re-code the software in C/C++, both for execution speed and for the relative IP-protection provided by releasing only compiled binary executables.