Beyond Numbers. The IQ and Qi of Investing.

Numbers, facts, analysis — this is how I am trained to evaluate and decide. This approach works well in the world of engineering and physical science. However, I’ve found that in the domain of investing, quantitative methods sometimes function best as a means of staying grounded and objective; subjective approaches can be extremely effective and profitable.

The cognitive process of investing can be roughly subdivided into three categories:

  1. Intuitive (‘I’).  This portion is essentially driven by emotion, feelings, and hunches.  Thoughts and choices are arrived at not by inductive or deductive reasoning, but by other means.  Cognitive science suggests that the brain’s massive neural nets weigh vast stores of perceived information and arrive at conclusions based on past experience and training.
  2. Quasi-objective reasoning (‘Q’).  This blends analytical and intuitive thinking.  It encompasses both top-down reasoning (idea first) and bottom-up reasoning guided by intuition (data first).
  3. Numerical/analytical (‘N’).  This approach focuses on quantitative, empirical data.  Numbers are crunched and conclusions follow.  The difficult part is ensuring that both the inputs to the “number crunching” and the mechanisms of the “number crunching” are accurate.

I’ll use the shorthand I, Q, N to denote the processes enumerated above.  In my experience, most well-examined financial decisions follow the basic pattern I→Q→N→Q→I or I→Q→I.  In other words, most financial decisions begin and end with intuitive (a.k.a. gut-level) thinking.  If a “disciplined investing approach” is strictly employed, a truncated →Q (or →Q→N→Q) is appended to the process, where pre-determined rules are used to vet investment selections against suitability criteria.

So far my highest-return investment decisions have been I1→Q1→I2→Q2→N1→Q3 decisions.  The first part (I1→Q1→I2) is the time-consuming part (in aggregate), because many ideas are discarded during the Q1 and I2 steps.  The quicker remainder (Q2→N1→Q3) is the sanity and scale check: Q2 frames the decision, N1 crunches the numbers, and Q3 evaluates the outcome.  If the investment is deemed sound it is scaled appropriately based on risk and value-at-risk ratios; otherwise it is discarded or, in some cases, revised and re-evaluated.

Sigma1 Financial software can play an important role in the Q and N steps of the investment decision process.  The Q process can be either data-first or idea-first.  Sigma1 Financial also excels in the N process, imposing objectivity and performing the numerical heavy lifting.  What Sigma1 Financial software cannot do, nor is ever likely to do, is participate directly in the I steps.  The ‘I’ steps remain solidly aligned with the human element of the investment decision process.

The choice of letters ‘I’ and ‘Q’ is deliberate.  The investment IQ of investors is one important component of successful investing.  Higher investment IQs tend to produce superior investment returns.  Similarly, Qi is also critical to long-term investing success.  ‘N’ is used because it is neutral and disconnected.  Numerical, disciplined analytical methods provide ballast against the classic investing emotions of fear and greed (as well as unbridled enthusiasm and despair).

Investing IQ and Qi are brought to the table by financial professionals, while financial software provides powerful enhancements to Q, N, and especially QN and NQ, the areas separate from IQ and Qi.

The IQN domain is continuous, not discrete.  ‘I’ defines one edge which begins to blend into ‘Q’.   ‘N’ defines the opposite edge which blends with the other side of ‘Q’.  Q resides in the middle, merging aspects of I and N.

When investors understand IQN concepts, it helps remove emotion from investment decisions while acknowledging the importance of intuition.  IQN concepts also help show where and how financial software and analysis tools integrate with the investing process.


Variance, Semivariance Convergence

In running various assets through portfolio-optimization software, I noticed that for an undiversified set of assets there can be wide differences between the portfolios with the highest Sharpe ratios and the portfolios with the highest Sortino ratios.  Further, if an efficient frontier of ten portfolios is constructed (based on mean-variance optimization) and sorted according to both Sharpe and Sortino ratios, the two orderings are very different.
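To make the comparison concrete, here is a minimal Python sketch, assuming each candidate frontier portfolio has been reduced to a series of periodic returns; the data is synthetic and the risk-free rate is taken as zero:

    import numpy as np

    def sharpe(returns, rf=0.0):
        """Mean excess return divided by the standard deviation of all returns."""
        excess = returns - rf
        return excess.mean() / excess.std(ddof=1)

    def sortino(returns, rf=0.0):
        """Mean excess return divided by downside deviation (negative returns only)."""
        excess = returns - rf
        downside = np.minimum(excess, 0.0)
        return excess.mean() / np.sqrt((downside ** 2).mean())

    # Rank ten synthetic frontier portfolios both ways and compare the orderings.
    rng = np.random.default_rng(42)
    frontier = [rng.normal(0.006 + 0.001 * k, 0.02 + 0.004 * k, 36) for k in range(10)]
    print("Sharpe order: ", sorted(range(10), key=lambda k: -sharpe(frontier[k])))
    print("Sortino order:", sorted(range(10), key=lambda k: -sortino(frontier[k])))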

If, however, the same analysis is performed on a globally-diversified set of assets, the portfolios tend to converge.  The broad ribbon of the 3-D efficient surface seen with undiversified assets narrows until it begins to resemble a string arching smoothly through space.  The Sharpe/Sortino orderings become very similar, with ranks seldom differing by more than one or two positions.  Portfolios E and F may rank 2 and 3 in the Sharpe ranking but 2 and 1 in the Sortino ranking, for example.

Variance/Semivariance divergence is wider for optimized portfolios of individual stocks.  When sector-based stock ETFs are used instead of individual stocks, the divergence narrows.  When bond- and broad-based index ETFs are optimized, the divergence narrows to the point that it could be considered by many to be insignificant.

This simple convergence observation has interesting ramifications.  First, a fast variance-optimization pass can be applied, followed by a slower semivariance-based refinement (sketched below), to reach a semivariance-optimized portfolio more efficiently.  Second, semivariance distinctions can be very significant for non-ETF (stock-picking) and less-diversified portfolios.  Third, for globally-diversified, stock/bond, index-ETF-based portfolios, the differences between variance-optimized and semivariance-optimized portfolios are extremely subtle.
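Here is a rough sketch of that two-pass idea, with scipy’s general-purpose optimizer standing in for a real portfolio optimizer; the helper names are mine, and this is not HAL0’s actual mechanism:

    import numpy as np
    from scipy.optimize import minimize

    def port_variance(w, returns):
        return np.var(returns @ w, ddof=1)

    def port_semivariance(w, returns):
        r = returns @ w
        return 2.0 / len(r) * np.sum(np.minimum(r, 0.0) ** 2)

    def two_stage(returns):
        n = returns.shape[1]
        cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
        bnds = [(0.0, 1.0)] * n
        # Pass 1: fast variance minimization from an equal-weight start.
        pass1 = minimize(port_variance, np.full(n, 1.0 / n), args=(returns,),
                         bounds=bnds, constraints=cons)
        # Pass 2: slower semivariance refinement, seeded with the pass-1 weights.
        pass2 = minimize(port_semivariance, pass1.x, args=(returns,),
                         bounds=bnds, constraints=cons)
        return pass2.x

    # Example on synthetic monthly returns for 10 assets:
    rng = np.random.default_rng(3)
    w = two_stage(rng.normal(0.006, 0.04, (36, 10)))
    print(np.round(w, 3), w.sum())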


Portfolio Risks: Risk Analysis, Optimization and Management

With news like JPMorgan losing $9 billion in a quarter due to trading losses, it’s no wonder that risk-management software is seen as increasingly important.  It appears that the highest-level executives have no clue how to assess the risks their traders are taking on.  No clue, that is, until they are side-swiped by massive losses.

To begin to fathom the risk exposure from proprietary trading (and hedging), it is necessary to have near-real-time data for the complete portfolio of securities, derivatives, and other financial positions and obligations.  This is a herculean, but achievable, task for more vanilla securities positions such as long and short positions in stocks, bonds, ETNs, options, and futures.  All of these assets have standardized tickers, trading rules, and essentially zero counterparty risk.  Further, these assets have thorough, easily-accessible, real-time data for price, volume, bid, and ask.  Even thinly-traded assets like many option contracts have sufficient data to at least estimate their current liquidation value with tolerable uncertainty (say ±10%).

OTC trades, contracts, and obligations pose a much greater challenge for risk managers.  Let’s consider credit-default swaps on Greek bonds.  Believe it or not, there is uncertainty over the definition of “default”.  If European banks agree to take a 50% haircut on Greek debt, does that constitute a default?  Most accounts I have read say no.  So even a savvy European bank that hedged its Greek bond exposure with CDS contracts loses.  Its hedge really wasn’t one.

Sigma1 doesn’t (currently) attempt to assess risk for exotic OTC contracts and obligations.  What Sigma1 HAL0 software does do is better model standardized financial asset portfolios.  A tag line for HAL0 software could be “Risk: Better Modelling, Sounder Sleep”.

My goal is to continuously improve risk management and risk optimization in the following ways:

  1. Risk models that are more robust and intuitive.
  2. Enhanced risk visualization: taking the abstract and making it visible.
  3. Optimizing (minimizing) downside risk with sophisticated heuristic algorithms.

I prefer the term “optimize” (in most contexts) to “minimize” or “maximize” because, in context, it is clear what optimize means.  Naturally, portfolio optimization means finding the efficient frontier of risk-minimized returns (or return-maximized risks).  Either way, optimization usually involves concurrent minimization and maximization of various objective functions.

HAL0 portfolio optimization is best suited to optimizing the following types of funds and portfolios: 1) individual investment portfolios, 2) endowment portfolios, 3) pension funds, 4) insurance company portfolios, 5) traditional (non-investment-bank) bank portfolios, and 6) company investment portfolios (including bond obligations).

While the core HAL0 optimization algorithm is designed to optimize more than three objective functions, I have been increasingly focused on optimizing for three concurrent objectives.  In the most common usage model, I envision one expected-return function, one risk function, and a third objective function.  The third objective function can be another risk model, a diversification metric, an investment-style metric, or any other quantitative measure.

For example, HAL0 can optimize from a pool of 500 investments to create a 3-D efficient-frontier surface.  The z axis is, by convention, always the expected return.  The x axis is generally the primary risk measure, such as 3-year monthly semivariance.  The y axis, depth, can be another risk measure, such as worst 5-year quarterly return.
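A rough picture of such a surface can be generated with matplotlib; the portfolios below are randomly sampled rather than optimized, and an approximate worst quarterly loss stands in for the worst 5-year quarterly return:

    import numpy as np
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D  # registers the 3-D projection on older matplotlib

    rng = np.random.default_rng(1)
    monthly = rng.normal(0.007, 0.05, (60, 20))   # 5 years of synthetic monthly returns, 20 assets

    xs, ys, zs = [], [], []
    for _ in range(2000):                          # sample random long-only portfolios
        w = rng.dirichlet(np.ones(20))
        r = monthly @ w
        xs.append(2.0 / len(r) * np.sum(np.minimum(r, 0.0) ** 2))  # x: semivariance
        ys.append(-r.reshape(-1, 3).sum(axis=1).min())             # y: worst quarterly loss (months summed)
        zs.append(r.mean() * 12)                                   # z: annualized expected return

    ax = plt.figure().add_subplot(projection='3d')
    ax.scatter(xs, ys, zs, s=2)
    ax.set_xlabel('semivariance')
    ax.set_ylabel('worst quarterly loss')
    ax.set_zlabel('expected return')
    plt.show()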

Looking at this surface gives perspective on the tradeoffs between the various return and risk metrics.  It is particularly elucidating to plot a point representing one’s current investment pool or portfolio.  If it is on the surface, it is optimal (or near-optimal); if it is under the surface, it is sub-optimal.  Either way, looking north, south, east, or west shows the nearby alternatives — trade-offs of various risks and rewards.

So the nascent marketer in me asks:  Can your financial optimization software optimize and display in 3 dimensions?  Can it optimize non-standard objective functions (such as worst-case quarterly return over 5 years)?  Was your current portfolio software written from the ground up for financial optimization challenges?

HAL0 is.  It is the financial software that I would buy (and will personally use) to optimize my financial portfolio.  It is so compelling that it is the first project that has made me seriously consider quitting a day job with excellent benefits, vacation, and a six-figure salary.  To borrow a baseball analogy, software development and finance are in my wheelhouse.  I am considering giving up the comfort and security of a solid job in electrical engineering to pursue my dream and my truest talents.  Many in my industry would “kill” for my current position; to me it feels largely intellectually unchallenging.  In contrast, developing and enhancing HAL0 has taken every spare ounce of my creativity, knowledge, and passion.  In essence, HAL0 is a labor of love.

I passionately want to redefine financial risk.  I also want to modestly redefine financial return.  I see the current financial model as flawed in major and minor (yet significant) ways and hope to reinvent it.  It’s about leveraging the best of the past (Markowitz’s core ideas, including semivariance) and the best of the now (fast, networked, parallel compute technology).  To accomplish this requires great software, the beta version of which, called HAL0, resides on my Linux server.

Benchmarking Financial Algorithms

In my last post I showed that there are far more than a googol permutations of a portfolio of 100 assets with (positive, non-zero) weights in increments of 10 basis points, or 0.1%.  That number can be expressed as C(999,99), or C(999,900), or 999!/(99!·900!), or approximately 6.385×10^138.  Out of sheer audacity, I will call this number Balhiser’s first constant (Kβ1).  [Wouldn’t it be ironic and embarrassing if my math was incorrect?]
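Anyone who wants to check that math can do so with Python’s exact integer combinatorics:

    from math import comb

    # 100 positive weights in 0.1% steps summing to 100%:
    # compositions of 1000 units into 100 positive bins = C(999, 99)
    k_beta_1 = comb(999, 99)
    assert k_beta_1 == comb(999, 900)   # symmetry: C(n, k) == C(n, n-k)
    print(f"{k_beta_1:.3e}")            # compare against the claimed ~6.385e138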

In the spirit of Alan Turing’s 100th birthday today and David Hilbert’s 23 unsolved problems of 1900, I propose the creation of an initial set of financial problems to rate the general effectiveness of various portfolio-optimization algorithms.  These problems would be of a similar form, each having a search space of Kβ1.  There would be 23 initial problems, P1…P23.  Each security would have a series of 37 monthly absolute returns and an expected annualized 3-year return (some based on the historic 37-month returns, others independent).  The challenge for any algorithm A is to achieve the best average score on these problems.

I propose the following scoring measures:  1) S″(A) (S double prime), which simply computes the least average semi-variance portfolio independent of expected return; 2) S′(A), which computes the best average semi-variance and expected-return efficient frontier versus a baseline frontier; 3) S(A), which computes the best average semi-variance, variance, and expected-return efficient-frontier surface versus a baseline surface.  Any algorithm would be disqualified if any single test took longer than 10 minutes.  Similarly, any algorithm would be disqualified if it failed to produce a “sufficient solution density and breadth” for S′ and S″ on any test.  Obviously, a standard benchmark computer would be required.  Any OS, supporting software, etc. could be used for purposes of benchmarking.

The benchmark computer would likely be a well-equipped multi-core system, such as a 32 GB Intel i7-3770 system.  There could be separate benchmarks for parallel computing, where the algorithm plus hardware was tested as a holistic system.

I propose these initial portfolio benchmarks for a variety of reasons:  1) similar standardized benchmarks have been very helpful in evaluating and improving algorithms in other fields, such as electrical engineering; 2) they provide a standard that helps separate statistically significant inference from anecdote; 3) they illustrate both the challenge and the opportunity for financial algorithms to solve important investing problems; 4) they lower barriers to entry for financial-algorithm developers (and thus lower the cost of high-quality algorithms to financial businesses); and 5) I believe HAL0 can provide superior results.

The Equation that Will Change Finance

Two mathematical equations have transformed the world of modern finance.  The first was CAPM, the second Black-Scholes.  CAPM gave a new perspective on portfolio construction.  Black-Scholes gave insight into pricing options and other derivatives.  There have been many other advancements in the field of financial optimization, such as Fama-French — but CAPM and Black-Scholes-Merton stand out as perhaps the two most influential.

Enter Semi-Variance

SV = (2/n) × Σ_{i=1..n} ( r_i < 0 ? r_i² : 0 )

Modified Semi-Variance Equation, A Financial Game Changer

When CAPM (and MPT) were invented, computers existed, but were very limited.  Though the father of MPT, Harry Markowitz, wanted to use semi-variance, the computers of 1959 were simply inadequate.  So Markowitz used variance in his groundbreaking book “Portfolio Selection: Efficient Diversification of Investments”.

Choosing variance over semi-variance made the computations orders of magnitude easier, but they were still very taxing to the computers of 1959.  Classic covariance-based optimizations are still reasonably compute-intensive when a large number of assets is considered.  Classic optimization of a 2000-asset portfolio starts by creating a covariance matrix with 2,001,000 unique entries (which, mirrored about the shared diagonal, fill the full 4,000,000-entry matrix); that is the easy part.  The hard part involves optimizing (minimizing) portfolio variance for a range of expected returns.  This is often referred to as computing the efficient frontier.
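In code, the bookkeeping looks something like the following sketch (synthetic data, equal weights, and numpy; not a production optimizer):

    import numpy as np

    n = 2000
    print(n * (n + 1) // 2)   # 2,001,000 unique entries (diagonal plus one triangle)
    print(n * n)              # 4,000,000 entries once mirrored about the diagonal

    # With the covariance matrix in hand, portfolio variance is a single quadratic form.
    rng = np.random.default_rng(0)
    returns = rng.normal(0.01, 0.05, (120, n))   # 10 years of synthetic monthly returns
    sigma = np.cov(returns, rowvar=False)        # the 2000 x 2000 covariance matrix
    w = np.full(n, 1.0 / n)                      # equal weights, for illustration
    print(w @ sigma @ w)                         # portfolio variance, w' * Sigma * w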

The concept of semi-variance (SV) is very similar to the variance used in CAPM.  The difference is in the computation.  A quick internet search reveals very little data about computing a “semi-covariance matrix”.  Such a matrix, if it existed in the right form, could possibly allow quick and precise computation of portfolio semi-variance in the same way that a covariance matrix does for portfolio variance.  Semi-covariance matrices (SCMs) exist, but none “in the right form.”  Each form of SCM has strengths and weaknesses.  Thus, one of the many problems with semi-covariance matrices is that there is no unique canonical form for a given data set.  SCMs of different types capture only an incomplete portion of the information needed for semi-variance optimization.

The beauty of SV is that it measures “downside risk” exclusively.  Variance includes the odd concept of “upside risk” and penalizes investments for it.  While not going to the extreme of rewarding upside “risk”, the modified semi-variance formula presented in this blog post simply disregards it.

I’m sure most of the readers of this blog understand this modified semi-variance formula.  Please indulge me while I touch on some of the finer points.  First, the 2 may look a bit out of place; it simply normalizes the value of SV relative to variance (V).  Second, the “question mark, colon” notation simply means: if the first statement is true, use the squared value in the summation, else use zero.  Third, notice I use r_i rather than r_i − r_avg.

The last point above is intentional, and another difference from “mean variance”, or rather “mean semi-variance”.  If R is monotonically increasing for all samples (n intervals, n+1 data points), then SV is zero.  I have many reasons for this choice.  The primary reason is that with r_avg in the formula, the SV of a straight, steadily descending R would be zero.  I don’t want a formula that rewards such performance with 0, the best possible SV score.  [Others would substitute T, a usually positive number, as a target return, sometimes called the minimal acceptable return.]

Finally, a word about r: r_i is the total return over the interval i.  Intervals should be as uniform as possible.  I tend to avoid daily intervals due to the non-uniformity introduced by weekends and holidays.  Weekly (last closing price of the trading week), monthly (last closing price of the month), and quarterly intervals are significantly more uniform in duration.
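Transcribed directly into Python, the formula and the monotonic-increase property look like this (the function name is my own):

    import numpy as np

    def modified_semivariance(r):
        """SV = (2/n) * sum of r_i^2 over intervals where r_i < 0."""
        r = np.asarray(r, dtype=float)
        return 2.0 / len(r) * np.sum(np.where(r < 0, r, 0.0) ** 2)

    print(modified_semivariance([0.02, 0.01, 0.03]))     # monotonically rising R -> SV = 0
    print(modified_semivariance([-0.02, -0.01, -0.03]))  # steadily falling R -> SV > 0, as intended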

Big Data and Heuristic Algorithms

Innovations in computing and algorithms are how semi-variance equations will change the world of finance.  Common sense is why.  I’ll explain why heuristic algorithms like Sigma1’s HAL0 can quickly find near-optimal SV solutions on a common desktop workstation, and even better solutions when leveraging a data center’s resources.  And I’ll explain why SV is vastly superior to variance.

Computing SV for a single portfolio of 100 securities is easy on a modern desktop computer.  For example, 3-year monthly semi-variance requires 3700 multiply-accumulate operations to compute the portfolio return series, R_p, followed by a mere 37 subtractions, 36 multiplies (for squaring), and 36 additions (plus multiplying by 2/n).  Any modern computer can perform this computation in the blink of an eye.

Now consider building a 100-security portfolio from scratch.  Assume the portfolio is long-only and that any of these securities can have a weight between 0.1% and 90% in steps of 0.1%.  Each security has 900 possible weightings.  I’ll spare you the math — there are about 6.385×10^138 permutations.  Needless to say, this problem cannot be solved by brute force.  Further, note that if the portfolio is turned into a long/short portfolio, where negative weights down to −50% are allowed, the search space explodes to close to 10^2000.

I don’t care how big your data center is; a brute-force solution is never going to work.  This is where heuristic algorithms come into play.  Heuristic algorithms are a subset of metaheuristics.  In essence, heuristic algorithms are algorithms that guide heuristics (or vice versa) to find approximate solutions to a complex problem.  I prefer the term heuristic algorithm to describe HAL0 because in some cases it is hard to say whether a particular line of code is “algorithmic” or “heuristic” — sometimes the answer is both.  For example, semi-variance is computed by an algorithm but is fundamentally a heuristic.

Heuristic algorithms (HAs) find practical solutions for problems that are too difficult to brute-force.  They can be configured to look deeper or run faster, as desired by the user.  Smarter HAs can take advantage of modern computing infrastructure by utilizing multiple threads, multiple cores, and multiple compute servers in parallel.  Many, such as HAL0, can provide intermediate solutions as they run far and deep into the solution space.

Let me be blunt — if you’re using Microsoft Excel Solver for portfolio optimization, you’re missing out.  Fly me out and let me bring my laptop loaded with HAL0 to crunch your data set — you’ll be glad you did.

Now For the Fun Part:  Why switch to Semi-Variance?

Thanks for reading this far!  Would you buy insurance that paid you only if your house didn’t burn down?  Say you pay $500/year and after 10 years, if your house is still standing, you get $6000; otherwise you get $0.  Ludicrous, right?  Or insurance that only “protects” your house from appreciation?  Say it pays 50 cents for every dollar you make when you resell your house, but if you lose money on the resale you get nothing.

In essence, that is what you are doing when you buy (or create) a portfolio optimized for variance.  Sure, variance analysis seeks to reduce the downs, but it also penalizes the ups (if they are too rapid).  Run the numbers on any portfolio and you’ll see that SV ≠ V.  All things equal, the portfolios with SV < V are the better bet.  (Note that classic SV ≤ V, because it sums only a subset of the squared deviations that V sums.)
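A quick numeric check of that parenthetical, on a made-up return series:

    import numpy as np

    r = np.array([0.04, -0.02, 0.01, -0.05, 0.03, 0.02])   # made-up interval returns
    dev = r - r.mean()
    V = np.mean(dev ** 2)
    classic_SV = np.mean(np.minimum(dev, 0.0) ** 2)        # below-mean deviations only
    print(V, classic_SV, classic_SV <= V)                  # the inequality always holds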

Let me close with a real-world example.  SPLV is an ETF I own.  It is based on owning the 100 stocks out of the S&P 500 with the lowest 12-month volatility.  It has performed well and been received well by the ETF marketplace, accumulating over $1.5 billion in AUM.  A simple variant of SPLV (which could be called PLSV, for PowerShares Low Semi-Variance) would contain the 100 stocks with the least SV.  An even better variant would contain the 100 stocks that in aggregate produced the lowest-SV portfolio over the preceding 12 months.

HAL0 has the power to construct such a portfolio.  It could preserve the relative market-cap ratios of the 100 stocks while picking which 100 stocks are collectively optimal.  Or it could produce a re-weighted portfolio that further reduces overall semi-variance.  (A sketch of the simpler per-stock screen follows.)
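Here is a sketch of the simpler per-stock screen, assuming a 12-row matrix of monthly returns per ticker; data loading is omitted, and the aggregate lowest-SV variant would require a true portfolio optimizer:

    import numpy as np

    def lowest_sv_screen(monthly_returns, tickers, keep=100):
        """Rank each stock by its own 12-month modified semivariance; keep the lowest."""
        # monthly_returns: shape (12, n_stocks); column j holds ticker j's monthly returns
        sv = 2.0 / monthly_returns.shape[0] * np.sum(
            np.minimum(monthly_returns, 0.0) ** 2, axis=0)
        return [tickers[i] for i in np.argsort(sv)[:keep]]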

[Even more information on semi-variance (in its many related forms) can be found here.]


Seeking a Well-Matched Angel Investor (Part I)

Most of the reading I have done regarding angel investing suggests that finding the right “match” is a critical part of the process.  This process is not just about a business plan and a product; it is also about people and personalities.

Let me attempt to give some insight into my entrepreneurial personality.  I have been working (and continue to work) in a corporate environment for 15 years.  Over that time I have received a lot of feedback.  Two common themes emerge from that feedback: 1) I tend to be a bit too “technical”, and 2) I tend to invest more effort in work that I like.

Long Story about my Tech Career

Since I work in the tech industry, being too technical at first didn’t sound like something I should work on.  I eventually came to understand that this wasn’t feedback from my peers, but from managers.  Tech moves so fast that many managers simply do not keep up with the changes except in the most superficial ways.  (Please note I say many, not most.)  While being technical is my natural tendency, I have learned to adjust the technical content to suit the composition of the meeting room.

The second theme has been a harder personal challenge.  Two general areas I love are technical challenges and collaboration.  I love when there is no “smartest person in the room” because everybody is the best at at least one thing, if not many.  When a team like that faces a new critical issue — never before seen — magic often occurs.  To me this is not work; it is much closer to play.

I have seen my industry, VLSI and microprocessor design, evolve and mature.  While everyone is still the “smartest person in the room”, the arrival of novel challenges is increasingly rare.   We are increasingly challenged to become masters of execution rather than masters of innovation.

Backing up a bit: when I started at Hewlett-Packard, straight out of college, I had the best job in the world, or darn near.  For 3-4 months I “drank from a fire hose” of knowledge from my mentor.  After just 6 months I was given what, even in retrospect, were tremendous responsibilities (and a nice raise).  I was put in charge of integrating “logic synthesis” software into the lab’s compute infrastructure.  When I started, about 10% of the lab’s silicon area was created via synthesis; when I left 8 years later, about 90% was.  I was part of that transformation, but I wasn’t the cause — logic synthesis was simply the next disruptive technology in the industry.

So why did I change companies?  I was developing software to build advanced ASICs.  First the company moved ASIC manufacturing overseas, then increasingly ASIC hardware design.  The writing was on the wall… ASIC software development would eventually move too.  So I made a very difficult choice and moved into microprocessor software development.  Looking back now, this was likely the best career choice I have ever made.

Practically overnight I was again “drinking from a fire hose.”  Rather than working with software my former teammates and I had built from scratch, I was knee-deep in poorly-commented code that had been abandoned by all but one of the original developers.  In about 9 months my co-developer and I had transformed this code into something that resembled properly-architected software.

Again, I saw the winds of change transforming my career environment: this time, microprocessor design.  Software development was moving from locally-integrated hardware/software design labs to a centralized software-design organization.  Seeing this shift, I moved within the company to microprocessor hardware design.  Three and a half years later, I see the pros and cons of this choice.  The largest pro is having about 5 times more opportunities in the industry — both within the company and without.  The largest con, for me, is dramatically less software-development work.  Hardware design still requires some software work, perhaps 20-25%.  Much of this software, however, is very task-specific.  When the task is complete — perhaps after a week or a month — it is obsolete.

A Passion for Software and Finance

While I was working, I spent some time in grad school.  I took all the EE classes that related to VLSI and microprocessor design.  The most interesting class was an open-ended research project.  The project I chose, while related directly to microprocessor design, had a 50/50 mix of software design and circuit/device-physics research.  I took on the software design work, and my partner took on most of the other work.  The resulting paper was shortened and revised (with the help of our professor and a third grad student) and accepted for presentation at the 2005 Society for Industrial and Applied Mathematics (SIAM) conference in Stockholm, Sweden.  Unfortunately, none of us were able to attend due to conflicting professional commitments.

Having exhausted all “interesting” EE/ECE courses, I started taking grad-school courses in finance.  CSU did not yet have a full-fledged MSBA program in Financial Risk Management, but it did offer a Graduate Certificate in Finance, which I earned.  Some research papers of note include “Above Board Methods of Hedging Company Stock Option Grants” and “Building an ‘Optimal’ Bond Portfolio including TIPS.”

Software development has been an interest of mine since I took a LOGO summer class in 5th grade.  It has been a passion of mine since I taught myself “C” in high school.  During my undergrad in EE, I took enough CS electives to earn a Minor in Computer Science along with my BSEE.  Almost all of my elective CS courses centered on algorithms and AI.  Unlike EE, which at times I found very challenging, I found CS courses easy and fun.  That said, I earned straight A’s in college, grad and undergrad, with one exception: I got a B- in International Marketing.  Go figure.

My interest in finance started early as well.  I had a paper route at the age of 12, and a bank account.  I learned about compound interest and was hooked.  With help from my Dad, and still 12 years old, I soon had a money market account and a long-maturity zero-coupon bond.  My full-fledged passion for finance developed when I was issued my first big grant of company stock options.  I realized I knew quite a bit about stocks, bonds, CDs, and money market funds, but practically nothing about options.  Learning about options was the primary reason I started studying finance in grad school.  I was, however, soon to learn about CAPM and MPT, and portfolio construction and optimization.  Since then, trying to build the “perfect” portfolio has been a lingering fascination.

Gradually, I began to see flaws in MPT and the efficient-markets hypothesis (EMH) — flaws that Markowitz acknowledged from the beginning!  [Amazing what you can learn by going beyond textbooks, back to original sources.]  I read in some depth about the rise and demise of Long-Term Capital Management.  I read about high-frequency trading methods and algorithms.  I looked into how options can be integrated into long-term portfolio-building strategies.  And finally, I started researching the ever-evolving field of Post-Modern Portfolio Theory (PMPT).

When I finally realized how I could integrate my software-development skills, my computer-science (AI) background, my graduate EE/ECE work, and my financial background into a revolutionary software product, I was thunderstruck.  I could, and did, build the alpha version of this product, HAL0, and it works even better than I expected.  If I can turn this product into a robust business, I can work on what I like, even what I love.  And that passion will be a strength rather than a “flaw”.  Send me an angel!


Toss your Financial Slide-rule: Beta Computation, MPT, and PMPT

Let me take you back to grad school for a few moments, or perhaps your college undergrad.  If you’ve studied much finance, you’ve surely studied beta in the context of modern portfolio theory (MPT) and the Capital Asset Pricing Model (CAPM).  If you are a quant like me, you may have been impressed with the elegance of the theory: a theory that explains the value and risk of a security not in isolation, but in the context of markets and portfolios.

Markowitz’s MPT book, published in the late ’50s, must have come as a clarion call to some investment managers.  Published ten years prior, Benjamin Graham’s The Intelligent Investor was, perhaps, the most definitive book of its time.  Graham’s book described an intelligent portfolio as a roughly 50/50 stock/bond mix, where each stock or bond had been selected to provide a “margin of safety”.  Graham provided a value-oriented model for security analysis; Markowitz provided the tools for portfolio analysis.  The concept of beta, which grew out of Markowitz’s framework, added another dimension to security analysis.

As I explore new frontiers of portfolio modeling and optimization, I like to occasionally survey the history of the evolving landscape of finance.  My survey led me to put together a spreadsheet to compute β.  Here is the beta-computation spreadsheet.  The Excel spreadsheet uses three different methods to compute β, and they produce nearly identical results.  I used 3 years of weekly adjusted closing-price data for the computations.  R² and α (alpha) are also computed.  The “nearly” part of identical gives me a bit of pause — is it simply round-off, or are there errors?  Please let me know if you see any.
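For readers who prefer code to spreadsheets, here is a sketch of three standard, mathematically equivalent ways to compute β; the data is synthetic, and I am assuming (not asserting) that these mirror the spreadsheet’s three methods:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    market = rng.normal(0.002, 0.02, 156)               # 3 years of weekly market returns
    stock = 1.2 * market + rng.normal(0.0, 0.01, 156)   # synthetic stock, true beta ~1.2

    beta1 = np.cov(stock, market, ddof=1)[0, 1] / np.var(market, ddof=1)   # cov / var
    slope, alpha, r_value, _, _ = stats.linregress(market, stock)          # OLS slope
    beta3 = np.corrcoef(stock, market)[0, 1] * stock.std(ddof=1) / market.std(ddof=1)

    print(beta1, slope, beta3)    # three routes to the same beta, up to round-off
    print(alpha, r_value ** 2)    # alpha and R-squared come along for free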

An ancient saying goes, “Seek not to follow in the footsteps of men of old; seek what they sought.”  The path of “modern” portfolio theory leaves behind many footprints, including β and R².  Today, the computation of these numbers is a simple academic exercise.  The fact that these numbers represent closed-form solutions (CFS) to some important financial questions has an almost irresistible appeal to many quantitative analysts and finance academics.  But CFS were just steps along the path; the goal was building better portfolios.

Markowitz’s tools were mathematics, pencils, paper, a slide rule, and books of financial data.  The first handheld digital calculator wasn’t invented until 1967.  As someone quipped, “It’s not like he had a Dell computer on his desk.”  He used the mathematical tools of statistics developed more than 30 years prior to his birth.  A consequence of his environment is Markowitz’s (primary) definition of risk: mean variance.  When first learning about mean-variance optimization (MVO), almost every astute learner eventually asks the perplexing question, “So upside ‘risk’ counts the same as the risk of loss?”  In MPT, the answer is a resounding “Yes!”

The current year is 2012, and most sophisticated investors are still using tools developed during the slide-rule era.  The reason the MVO approach to risk feels wrong is that it simply doesn’t match the way clients and investors define risk.  Rather than adapt to the clients’ view of risk, most investment advisers, ratings agencies, and money managers ask the client to fill out a “risk tolerance” questionnaire that tries to map investor risk models into a handful of MV boxes.

MPT has been tweaked and incrementally improved by researchers like Sharpe and Fama and French — to name a few.  But the mathematically convenient MV definition of risk has lingered like a baseball pitcher’s nagging shoulder injury.  Even if this metaphorical “injury” is not career-ending, it can be career-limiting.

There is a better way, though it has a clunky name: Post-Modern Portfolio Theory (PMPT).  [Clearly most quants and financial researchers are not good marketers… Next-Gen Portfolio Optimization, instead?]  The heart of PMPT can be summed up as “minimizing downside risk as measured by the standard deviation of negative returns.”  A good overview of PMPT appears in this Journal of Financial Planning article.  This quote from that article stands out brilliantly:

Markowitz himself said that “downside semi-variance” would build better portfolios than standard deviation. But as Sharpe notes, “in light of the formidable computational problems…he bases his analysis on the variance and standard deviation.”

“Formidable computational problems” of 1959 are much less formidable today.  Financial companies are replete with processing power, data storage, and computer networks.  In some cases developing efficient software to use certain PMPT concepts is easy; in other cases it can be extremely challenging.  (Please note the emphasis on the word ‘efficient’.  A financial algorithm that takes months to complete is unlikely to be of any practical use.)  The example Excel spreadsheet could easily be modified to compute a PMPT-inspired beta.  [Hint:  =IF(C4>0, 0, C4)]
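Translated out of Excel, one PMPT-inspired variant might look like the sketch below; the literature defines downside β several ways, and this is merely the one the hint suggests:

    import numpy as np

    def downside_beta(stock, market):
        """Clip positive periods to zero (the =IF(C4>0, 0, C4) hint), then compute beta as usual."""
        m_down = np.where(market > 0, 0.0, market)
        s_down = np.where(market > 0, 0.0, stock)
        return np.cov(s_down, m_down, ddof=1)[0, 1] / np.var(m_down, ddof=1)

    rng = np.random.default_rng(11)
    market = rng.normal(0.002, 0.02, 156)
    stock = 1.2 * market + rng.normal(0.0, 0.01, 156)
    print(downside_beta(stock, market))   # compare against the ordinary beta of ~1.2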

Are you ready to step off the beaten path constructed 50 years ago by wise men with archaic tools?  To step onto the hidden path they might have blazed, if armed with powerful computer technology?  Click the link to start your journey on the one less traveled by.