Portfolio Risks: Risk Analysis, Optimization and Management

With news like JPMorgan losing $9 billion in a quarter due to trading losses, it’s no wonder that risk management software is seen as increasingly important.  It appears that the highest-level executives have no clue how to assess the risks that their traders are taking on.  No clue, that is, until they are side-swiped by massive losses.

To begin to fathom the risk exposure from proprietary trading (and hedging), it is necessary to have near-real-time data for the complete portfolio of securities, derivatives, and other financial positions and obligations.  This is a herculean, but achievable, task for more vanilla securities positions such as long and short positions in stocks, bonds, ETNs, options, and futures.  All of these assets have standardized tickers, trading rules, and essentially zero counterparty risk.  Further, these financial assets have thorough, easily accessible, real-time data for price, volume, bid, and ask.  Even thinly traded assets like many option contracts have sufficient data to at least estimate their current liquidation value with tolerable uncertainty (say +/- 10%).

OTC trades, contracts, and obligations pose a much greater challenge for risk managers.  Let’s think about credit-default swaps on Greek bonds.  Believe it or not, there is uncertainty over the definition of “default”.  If European banks agreed to take a 50% haircut on Greek debt, does that constitute a default?  Most accounts I have read say no.  So even if a savvy European bank hedged its Greek bond exposure with CDS contracts, they lose.  Their hedge really wasn’t a hedge at all.

Sigma1 doesn’t (currently) attempt to assess risk for exotic OTC contracts and obligations.  What Sigma1 HAL0 software does do is better model standardized financial asset portfolios.  A tag line for HAL0 software could be “Risk: Better Modelling, Sounder Sleep”.

My goal is to continuously improve risk management and risk optimization in the following ways:

  1. Risk models that are more robust and intuitive.
  2. Enhanced risk visualization: taking the abstract and making it visible.
  3. Optimizing (minimizing) downside risk with sophisticated heuristic algorithms.

I prefer the term “optimize” (in most contexts) to “minimize” or “maximize” because in context it is clear what optimize means.  Naturally, portfolio optimization means finding the efficient frontier of risk-minimized returns (or return-maximized risks).  Either way, optimization usually involves concurrent minimization and maximization of various objective functions.

HAL0 portfolio optimization is best suited for optimizing the following types of funds and portfolios: 1) individual investment portfolios, 2) endowment portfolios, 3) pension funds, 4) insurance company portfolios, 5) traditional (non-investment-bank) bank portfolios, and 6) company investment portfolios (including bond obligations).

While the core HAL0 optimization algorithm is designed to optimize more than 3 objective functions, I have been increasingly focused on optimizing for 3 concurrent objectives.  In the most common usage model, I envision one expected-return function, one risk function, and a third objective function.  The third objective function can be another risk model, a diversification metric, an investment-style metric, or any other quantitative measure.

For example, HAL0 can optimize from a pool of 500 investments to create a 3D efficient-frontier surface.  The z axis is, by convention, always the expected return.  The x axis is generally the primary risk measure, such as 3-year monthly semivariance.  The y axis (depth) can be another risk measure, such as the worst quarterly return over 5 years.
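To make the three axes concrete, here is a minimal numpy sketch of how such objective values could be computed for one candidate portfolio.  The function names and the synthetic data shapes are mine, not HAL0's, and the semivariance here uses a plain downside-only definition (squaring only negative portfolio returns).

```python
import numpy as np

def expected_return(w, mu):
    """z axis: expected annualized portfolio return."""
    return float(w @ mu)

def monthly_semivariance(w, monthly):
    """x axis: 3-year monthly semivariance (squares only negative months)."""
    rp = monthly @ w                      # 36 monthly portfolio returns
    downside = np.minimum(rp, 0.0)
    return float(np.mean(downside ** 2))

def worst_quarter(w, quarterly):
    """y axis: worst single quarterly portfolio return over 5 years."""
    return float(np.min(quarterly @ w))

# Example with synthetic data for a 500-asset pool.
rng = np.random.default_rng(0)
mu = rng.normal(0.08, 0.05, 500)                   # expected annual returns
monthly = rng.normal(0.006, 0.04, (36, 500))       # 3 years of monthly returns
quarterly = rng.normal(0.02, 0.08, (20, 500))      # 5 years of quarterly returns
w = rng.dirichlet(np.ones(500))                    # one candidate long-only portfolio

point = (monthly_semivariance(w, monthly), worst_quarter(w, quarterly),
         expected_return(w, mu))
print(point)   # one (x, y, z) point; the optimized set of such points forms the surface
```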

Looking at this surface gives perspective on the tradeoffs between the various return and risk metrics.  It is particularly illuminating to plot a point representing one’s current investment pool or portfolio.  If it is on the surface, it is optimal (or near-optimal); if it is under the surface, it is sub-optimal.  Either way, looking north, south, east, or west shows the nearby alternatives: trade-offs between various risks and rewards.

So the nascent marketer in me asks:  Can your financial optimization software optimize and display in 3 dimensions?  Can it optimize non-standard objective functions (such as worst-case quarterly return over 5 years)?  Was your current portfolio-optimization software written from the ground up specifically for financial optimization challenges?

HAL0 is.  It is the financial software that I would buy (and will personally use) to optimize my own financial portfolio.  It is so compelling that it is the first project that has me seriously considering quitting a day job with excellent benefits, vacation, and a six-figure salary.  To borrow a baseball analogy, software development and finance are in my wheelhouse.  I am considering giving up the comfort and security of a solid job in electrical engineering to pursue my dream and my truest talents.  Many in my industry would “kill” for my current position; to me it feels largely intellectually unchallenging.  In contrast, developing and enhancing HAL0 has taken every spare ounce of my creativity, knowledge, and passion.  In essence, HAL0 is a labor of love.

I passionately want to redefine financial risk.  I also want to modestly redefine financial return.  I see the current financial model as flawed in major and minor (yet significant) ways, and I hope to reinvent it.  It’s about leveraging the best of the past (Markowitz’s core ideas, including semivariance) and the best of the now (fast, networked, parallel compute technology).  Accomplishing this requires great software, the beta version of which, called HAL0, resides on my Linux server.


Benchmarking Financial Algorithms

In my last post I showed that there are far more than a googol permutations of a portfolio of 100 assets with (positive, non-zero) weights in increments of 10 basis points, or 0.1%.  That number can be expressed as C(999,99), or C(999,900), or 999!/(99!*900!), or ~6.385*10^138.  Out of sheer audacity, I will call this number Balhiser’s first constant (Kβ1).  [Wouldn’t it be ironic and embarrassing if my math was incorrect?]
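For the skeptical reader, the count is easy to check: positive weights in 0.1% steps that sum to 100% are the compositions of 1000 into 100 positive parts, which number C(999, 99).  A short standard-library Python check:

```python
from math import comb

k_beta_1 = comb(999, 99)            # compositions of 1000 into 100 positive parts
print(f"{k_beta_1:.3e}")            # ~6.385e+138
print(k_beta_1 == comb(999, 900))   # True: C(999, 99) == C(999, 900)
```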

In the spirit of Alan Turing’s 100th birthday today and David Hilbert’s 23 unsolved problems of 1900, I propose the creation of an initial set of financial problems to rate the general effectiveness of various portfolio-optimization algorithms.  These problems would be of a similar form, each having a search space of Kβ1.  There would be 23 initial problems, P1…P23.  Each would provide a series of 37 monthly absolute returns for each security.  Each security would have an expected annualized 3-year return (some based on the historic 37-month returns, others independent).  The challenge for any algorithm A is to achieve the best average score on these problems.

I propose the following scoring measures:  1) S”(A) (S double prime), which simply computes the least average semi-variance portfolio, independent of expected return.  2) S’(A), which computes the best average semi-variance and expected-return efficient frontier versus a baseline frontier.  3) S(A), which computes the best average semi-variance, variance, and expected-return efficient-frontier surface versus a baseline surface.  Any algorithm would be disqualified if any single test took longer than 10 minutes.  Similarly, any algorithm would be disqualified if it failed to produce a “sufficient solution density and breadth” for S’ and S” on any test.  Obviously, a standard benchmark computer would be required.  Any OS, supporting software, etc. could be used for purposes of benchmarking.

The benchmark computer would likely be a well-equipped multi-core system, such as a 32 GB Intel i7-3770 system.  There could be separate benchmarks for parallel computing, where the algorithm plus hardware is tested as a holistic system.

I propose these initial portfolio benchmarks for a variety of reasons.  1) Similar standardized benchmarks have been very helpful in evaluating and improving algorithms in other fields, such as electrical engineering.  2) They provide a standard that helps separate statistically significant results from anecdotal inference.  3) They illustrate both the challenge and the opportunity for financial algorithms to solve important investing problems.  4) They lower barriers to entry for financial-algorithm developers (and thus lower the cost of high-quality algorithms to financial businesses).  5) I believe HAL0 can provide superior results.

The Equation that Will Change Finance

Two mathematical equations have transformed the world of modern finance.  The first was CAPM, the second Black-Scholes.  CAPM gave a new perspective on portfolio construction.  Black-Scholes gave insight into pricing options and other derivatives.  There have been many other advancements in the field of financial optimization, such as Fama-French — but CAPM and Black-Scholes-Merton stand out as perhaps the two most influential.

Enter Semi-Variance

[Figure: Modified Semi-Variance Equation, A Financial Game Changer]

When CAPM (and MPT) were invented, computers existed, but were very limited.  Though the father of MPT, Harry Markowitz, wanted to use semi-variance, the computers of 1959 were simply inadequate.  So Markowitz used variance in his groundbreaking book “Portfolio Selection: Efficient Diversification of Investments”.

Choosing variance over semi-variance made the computations orders of magnitude easier, but they were still very taxing to the computers of 1959.  Classic covariance-based optimizations are still reasonably compute-intensive when a large number of assets is considered.  Classic optimization of a 2000-asset portfolio starts by creating a covariance matrix with 2,001,000 unique entries (which, when mirrored about the shared diagonal, fill a 4,000,000-entry matrix); that is the easy part.  The hard part involves optimizing (minimizing) portfolio variance over a range of expected returns.  This is often referred to as computing the efficient frontier.
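As a rough illustration of that bookkeeping (not HAL0’s code, and using purely synthetic returns), building the 2000-asset covariance matrix and computing one classic frontier point, the global minimum-variance portfolio, looks something like the sketch below; sweeping a target-return constraint across the whole frontier is where the real compute cost lives.

```python
import numpy as np

rng = np.random.default_rng(0)
monthly = rng.normal(0.007, 0.04, size=(36, 2000))   # 36 months x 2000 assets (synthetic)

cov = np.cov(monthly, rowvar=False)                   # 2000 x 2000 covariance matrix
n = cov.shape[0]
print(n * (n + 1) // 2)                               # 2,001,000 unique entries

# One classic frontier point: the global minimum-variance portfolio,
# w = C^-1 * 1 / (1' * C^-1 * 1).  A small ridge keeps the (rank-deficient,
# 36-observation) sample covariance invertible.
ones = np.ones(n)
w = np.linalg.solve(cov + 1e-6 * np.eye(n), ones)
w /= w.sum()
print(float(w @ cov @ w))                             # that portfolio's variance
```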

The concept of semi-variance (SV) is very similar to the variance used in CAPM.  The difference is in the computation.  A quick internet search reveals very little data about computing a “semi-covariance matrix”.  Such a matrix, if it existed in the right form, could possibly allow quick and precise computation of portfolio semi-variance in the same way that a covariance matrix does for portfolio variance.  Semi-covariance matrices (SMVs) exist, but none “in the right form.”  Each form of SMV has strengths and weaknesses.  Thus, one of the many problems with semi-covariance matrices is that there is no unique canonical form for a given data set.  SMVs of different types capture only an incomplete portion of the information needed for semi-variance optimization.

The beauty of SV is that it measures “downside risk”, exclusively.  Variance includes the odd concept of “upside risk” and penalizes investments for it.  While not  going to the extreme of rewarding upside “risk”, the modified semi-variance formula presented in this blog post simply disregards it.

I’m sure most of the readers of this blog understand this modified semi-variance formula.  Please indulge me while I touch on some of the finer points.  First, the 2 may look a bit out of place.  The 2 simply normalizes the value of SV relative to variance (V).  Second, the “question mark, colon” notation simply means: if the first statement is true, use the squared value in the summation; else use zero.  Third, notice I use r_i rather than r_i − r_avg.
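The formula itself appeared as an image in the original post.  Based on the three points above (the factor of 2, the ternary, and the use of r_i without mean-centering), my reading of it in LaTeX is below; the r_i < 0 condition is my inference from the downside-only discussion.

```latex
SV \;=\; \frac{2}{n} \sum_{i=1}^{n}
\begin{cases}
  r_i^{2} & \text{if } r_i < 0 \\
  0       & \text{otherwise}
\end{cases}
```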

That last point is intentional, and it is another difference from “mean variance”, or rather “mean semi-variance”.  If R is monotonically increasing for all samples (n intervals, n+1 data points), then SV is zero.  I have many reasons for this choice.  The primary reason is that, with r_avg included, the SV for a steadily descending R would be zero.  I don’t want a formula that rewards such performance with 0, the best possible SV score.  [Others would substitute T, a usually positive number, as a target return, sometimes called the minimal acceptable return.]

Finally, a word about r: r_i is the total return over interval i.  Intervals should be as uniform as possible.  I tend to avoid daily intervals due to the non-uniformity introduced by weekends and holidays.  Weekly (last closing price of the trading week), monthly (last closing price of the month), and quarterly intervals are significantly more uniform in duration.

Big Data and Heuristic Algorithms

Innovations in computing and algorithms are how semi-variance equations will change the world of finance.  Common sense is why. I’ll explain why heuristic algorithms like Sigma1’s HALO can quickly find near-optimal SV solutions on a common desktop workstation, and even better solutions when leveraging a data center’s resources.  And I’ll explain why SV is vastly superior to variance.

Computing SV for a single portfolio of 100 securities is easy on a modern desktop computer.  For example, 3-year monthly semi-variance requires 3,700 multiply-accumulate operations to compute the portfolio returns, R_p, followed by a mere 37 subtractions, 36 multiplies (for squaring), and 36 additions (plus a multiplication by 2/n).  Any modern computer can perform this computation in the blink of an eye.
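A quick way to convince yourself of this is to time the evaluation directly.  The sketch below uses synthetic data and a plain numpy version of the modified SV formula; on a typical desktop a single evaluation takes on the order of microseconds.

```python
import timeit
import numpy as np

rng = np.random.default_rng(1)
monthly = rng.normal(0.007, 0.05, size=(37, 100))    # 37 monthly returns x 100 securities
weights = rng.dirichlet(np.ones(100))                # one candidate long-only portfolio

def modified_sv(w, r):
    rp = r @ w                           # the ~3,700 multiply-accumulates
    downside = np.minimum(rp, 0.0)       # the "? :" step: keep losses, zero out gains
    return 2.0 * np.mean(downside ** 2)  # the 2/n scaling

# Average seconds per evaluation over 10,000 runs.
print(timeit.timeit(lambda: modified_sv(weights, monthly), number=10_000) / 10_000)
```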

Now consider building a 100-security portfolio from scratch.  Assume the portfolio is long-only and that any of these securities can have a weight between 0.1% and 90% in steps of 0.1%.  Each security has 900 possible weightings.  I’ll spare you the math: there are ~6.385*10^138 permutations.  Needless to say, this problem cannot be solved by brute force.  Further note that if the portfolio is turned into a long-short portfolio, where negative weights down to -50% are allowed, the search space explodes to close to 10^2000.

I don’t care how big your data center is: a brute-force solution is never going to work.  This is where heuristic algorithms come into play.  Heuristic algorithms are a subset of metaheuristics.  In essence, heuristic algorithms are algorithms that guide heuristics (or vice versa) to find approximate solutions to a complex problem.  I prefer the term heuristic algorithm to describe HALO because in some cases it is hard to say whether a particular line of code is “algorithmic” or “heuristic”; sometimes the answer is both.  For example, semi-variance is computed by an algorithm but is fundamentally a heuristic.

Heuristic algorithms (HAs) find practical solutions for problems that are too difficult to brute-force.  They can be configured to look deeper or run faster, as desired by the user.  Smarter HAs can take advantage of modern computer infrastructure by utilizing multiple threads, multiple cores, and multiple compute servers in parallel.  Many, such as HAL0, can provide intermediate solutions as they run farther and deeper into the solution space.

Let me be blunt: if you’re using Microsoft Excel Solver for portfolio optimization, you’re missing out.  Fly me out and let me bring my laptop loaded with HAL0 to crunch your data set; you’ll be glad you did.

Now For the Fun Part:  Why switch to Semi-Variance?

Thanks for reading this far!  Would you buy insurance that paid out only if your house didn’t burn down?  Say you pay $500/year, and after 10 years, if your house is still standing, you get $6,000; otherwise you get $0.  Ludicrous, right?  Or insurance that only “protects” your house from appreciation?  Say it pays 50 cents for every dollar you make when you resell your house, but if you lose money on the resale you get nothing.

In essence, that is what you are doing when you buy (or create) a portfolio optimized for variance.  Sure, variance analysis seeks to reduce the downs, but it also penalizes the ups (if they are too rapid).  Run the numbers on any portfolio and you’ll see that SV ≠ V.  All things equal, the portfolios with SV < V are the better bet.  (Note that classic SV ≤ V, because it sums only a subset of the same non-negative terms that make up V.)
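A quick check with synthetic monthly returns illustrates both points: the classic (mean-centered, divide-by-n) semi-variance never exceeds variance, while the modified formula from this post can differ in either direction because it drops the mean and carries the factor of 2.

```python
import numpy as np

rng = np.random.default_rng(2)
r = rng.normal(0.007, 0.04, size=36)                       # 36 synthetic monthly returns

V = np.var(r)                                              # classic variance
classic_SV = np.mean(np.minimum(r - r.mean(), 0.0) ** 2)   # mean-centered downside only
modified_SV = 2.0 * np.mean(np.minimum(r, 0.0) ** 2)       # formula from this post

print(V, classic_SV, modified_SV)
assert classic_SV <= V    # holds for any data: a subset of the same squared terms
```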

Let me close with a real-world example.  SPLV is an ETF I own.  It is based on owning the 100 stocks out of the S&P 500 with the lowest 12-month volatility.  It has performed well and been received well by the ETF marketplace, accumulating over $1.5 billion in AUM.  A simple variant of SPLV (which could be called PLSV, for PowerShares Low Semi-Variance) would contain the 100 stocks with the least SV.  An even better variant would contain the 100 stocks that, in aggregate, produced the lowest-SV portfolio over the preceding 12 months.

HALO has the power to construct such a portfolio.  It could optimize while preserving the relative market-cap ratios of the 100 stocks, picking which 100 stocks are collectively optimal.  Or it could produce a re-weighted portfolio that further reduces overall semi-variance.
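The simple per-stock screen (the “PLSV” variant) is easy to sketch; picking the 100 stocks that are collectively lowest-SV, or re-weighting them, is the combinatorial problem that calls for an optimizer like HAL0.  The data below is synthetic, and cap-weighting the picks is just one plausible weighting rule.

```python
import numpy as np

rng = np.random.default_rng(3)
monthly = rng.normal(0.006, 0.05, size=(12, 500))     # 12 monthly returns x 500 stocks
market_cap = rng.lognormal(10, 1, size=500)           # synthetic market caps

# Per-stock modified SV over the trailing 12 months.
sv = 2.0 * np.mean(np.minimum(monthly, 0.0) ** 2, axis=0)

picks = np.argsort(sv)[:100]                          # 100 individually lowest-SV stocks
weights = market_cap[picks] / market_cap[picks].sum() # cap-weight the picks
print(list(picks[:10]), weights[:5])
```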

[Even more information on semi-variance (in its many related forms) can be found here.]


Green Software for Finance

When I Google for “green software”, the top results are about software that helps other activities become greener, such as utility power optimization or HVAC optimization.  This makes sense because “smart power” has a bigger footprint than compute power consumption.  Other search results focus on green IT (information technology).

I would like to contribute to the dialog about green software itself.  My definition of green technologies (green software, green hardware) is technology that consumes less power while producing comparable or better results.  My introduction to green technology began with power-efficient hardware design 7 years ago.  I learned that power savings equals performance improvement due to thermodynamic and other considerations.  This mantra (less power equals more performance) is gradually transforming semiconductor design.  Technology companies that understand this best (Intel, ARM, Samsung, Google) stand to benefit in the years ahead.

In general, for every watt of power consumed by compute hardware, a watt or more of building cooling power is required.  Software’s true power consumption is therefore about twice the compute power it directly consumes.  Compute power includes system power (CPU, RAM, drives, peripherals, and power-supply losses) plus power for networking (routers, switches, etc.) plus other consumers like network-attached storage.

Some of the electrical engineering compute jobs I run take a week, running 24×7, to complete, and often run on multiple CPUs and on multiple computers.  Each compute job easily consumes an average of 1 kilowatt of compute power, hence an average of 2 kW of data-center power.  This works out to about 336 kWh per run.  That is about $33 worth of power, enough to cover the average home’s electricity needs (not counting heating and cooling) for about 8 days.
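The arithmetic behind those figures, assuming an electricity rate of roughly $0.10 per kWh (the rate is my assumption; the post only gives the ~$33 total):

```python
hours      = 7 * 24            # one week, running 24x7
compute_kw = 1.0               # average compute power per job
total_kw   = 2 * compute_kw    # roughly one watt of cooling per compute watt

kwh  = total_kw * hours        # 336 kWh per run
cost = kwh * 0.10              # assumed ~$0.10 per kWh
print(kwh, cost)               # 336.0 kWh, roughly $33-34 at this rate
```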

Right now the “greenness” of software is relative.  Today the software-development world doesn’t have the right models to compare whether, say, a particular database program is more or less green than a particular financial program.  Software and IT professionals can, however, assess whether one specific portfolio-optimization solution is more or less green than another.

Creating green software begins with asking the right questions.  The fundamental questions are “How much power does our software consume, and how can we reduce it?”  I started to ask myself these questions early in the development of the HAL0 financial software.  I realized that the software was running fairly large computations on the same data over and over again; computations like the 3-year volatility of an asset.  I created a simple software cache that first checks whether the exact computation has been performed before.  If it has, the cache simply returns the previous result.  If not, the computation is performed and the result is saved in the cache.  The result was a 3X speed-up and an approximately 3X improvement in performance per watt for the HAL0 portfolio-optimization software.  The mantra that “power saved is performance gained” is even more true in the world of software.
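As a toy illustration of the idea (HAL0’s cache is hand-rolled and its internals are not shown here), Python’s functools.lru_cache gives the same check-before-compute behavior for a pure function of its inputs:

```python
from functools import lru_cache
import statistics

@lru_cache(maxsize=None)
def three_year_volatility(monthly_returns: tuple) -> float:
    """Annualized volatility of 36 monthly returns; cached by exact input."""
    return statistics.stdev(monthly_returns) * (12 ** 0.5)

returns = tuple(round(0.01 * ((i * 7) % 5 - 2), 4) for i in range(36))
print(three_year_volatility(returns))       # computed the first time
print(three_year_volatility(returns))       # identical call: served from the cache
print(three_year_volatility.cache_info())   # shows one hit, one miss
```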

The broader point is that green software design practices and a focus on faster software often lead to the same types of improvement.  The thought processes that arrive at those improvements, however, can be different enough to give developers new perspectives on their algorithms and code.  I found that some solutions that eluded me while looking for performance enhancements (speed-ups) were easily discovered by thinking about power and resource inefficiencies.  Similarly, some improvements that came quickly from profiling performance data would have been unlikely to occur to me when thinking about green software methods; it is only in retrospect that I saw their performance-per-watt benefit.  The concept of “green software” is complementary to other software concepts such as lightweight software, rapid prototyping, and time-complexity analysis and optimization.

HAL0 portfolio-optimization software is designed to be green.  It is designed to get more done with less power consumption.  Like all other green software, greener means faster.  Some of HAL0’s speed-ups have come from thinking green, and others have come from “thinking fast.”  The speed and efficiency of HAL0’s core engine is already high, but I envision further improvements of 5 to 10X.  It is simply a question of the time needed to implement them.