Engineering Profit versus Theoretical Profit

Either there is a veil of silence covering the world of finance, or the obvious parallels between electrical engineering (EE) and finance have been overlooked.  I suspect the former.

Almost every EE worth their salt has been exposed to the concepts of signals and signal processing in undergrad.  From signal-to-noise ratios (SNR) to filters (dB/decade) to digital signal processors (DSPs), EEs are trained to be experts at receiving the signal in spite of the noise.  More technobabble (but it's not!): the Fourier and Laplace transforms we routinely use to analyze the propagation of signals through circuits.  Not to mention waveguides, complex-conjugate reflections, amplitude and frequency modulation, etc.  Then there are the concepts of signal error detection, error correction, and information content.

My point is that financial firms made a mistake in hiring more physicists than electrical engineers.  At the end of the day (or the project) the work of the EE has to stand up to more than just academic scrutiny; it has to stand up to the real world — real products, real testing, real use.

EEs with years of experience have been there and done that.  Mind you, most are not interested in finance.  However, a handful of us are deeply interested in finance and investing.

These thoughts occurred to me as I was listening to speakers I built 15 years ago.  They still sound spectacular (unglaublich gut, for the Germans out there: unbelievably good).  They are now my second-tier speakers, relegated to computer audio.  Naturally, I have an amp fed by Toslink 48 kHz, 20-bit-per-channel audio data.  My point is that these speakers achieve their audio imaging with a smooth first-order crossover and tweeters/drivers chosen for phase-accurate performance over the frequencies where the human ear can best make use of audio imaging.

My second point is that a lot of engineering went into these speakers.  This engineering goes beyond electrical.  Speakers are fundamentally in the grey region between mechanical and electrical engineering.  However, the mechanical parameters can be “mapped” into the “domain” of electrical engineering concepts.  This positions EEs to pick the best designs and combine them to greatest advantage on a maximum-value-per-dollar basis.

This post is targeting a different audience than most.  Apologies.  An EE with a CS (computer science) background is an even better choice.

The analysis of financial data as concurrent, superimposed discrete waveforms is as natural to EEs as air is to mammals and water is to fish.  Audio is, perhaps, the simplest application.  Just Google “Nyquist-Shannon” if you want to know of which I speak.
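
For the curious, here is a toy sketch in Python of what I mean: build a “price-like” series out of two superimposed sine waves plus noise, then recover the dominant cycle lengths with an FFT.  This is purely illustrative signal processing on synthetic data, not a claim that markets contain these periodicities.

```python
import numpy as np

n = 512                                       # weekly samples, ~10 years
t = np.arange(n)
signal = (np.sin(2 * np.pi * t / 52)          # 52-week (annual) cycle
          + 0.5 * np.sin(2 * np.pi * t / 13)  # 13-week (quarterly) cycle
          + 0.3 * np.random.randn(n))         # noise
spectrum = np.abs(np.fft.rfft(signal))        # magnitude of each frequency bin
peaks = np.argsort(spectrum[1:])[-2:] + 1     # two strongest non-DC bins
print(sorted(round(n / k, 1) for k in peaks)) # cycle lengths: ~13 and ~51 samples
```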

I’m not for hire — I only do contract work.  I’m just telling hiring managers to both broaden and restrict their search criteria.  A well-qualified EE with financial expertise and a passion for finance is likely to be a better candidate than a Ph.D. in Physics.  Don’t hire Sheldon Cooper until you evaluate Howard Wolowitz (not an EE, but you get my point, I hope).


Beta Software, First Month

This marks the first month (30 days) of engagement with beta financial partners.  The goal is to test Sigma1 HAL0 portfolio-optimization software on real investment portfolios and get feedback from financial professionals.  The beta period is free.  Beta users provide tickers and expected-returns estimates via email, and Sigma1 provides portfolio results back with the best Sharpe, Sortino, or Sharpe/Sortino hybrid ratio results.

HAL0 portfolio-optimization software provides a set of optimized portfolios, often 40 to 100 “optimal” portfolios, optimized for expected return, return-variance and return-semivariance.   “Generic” portfolios containing a sufficiently-diverse set of ETFs produce similar-looking graphs.  A portfolio set containing SPY, VTI, BND, EFA, and BWX is sufficient to produce a prototypical graph.  The contour lines on the graph clearly show a tradeoff between semi-variance and variance.
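
For readers who want the nuts and bolts, here is a minimal sketch of the two risk measures being traded off, using plain sample statistics in Python.  HAL0’s internal (“modified”) definitions are not spelled out here, so treat this as illustrative only.

```python
import numpy as np

def portfolio_risk(weights, returns):
    """Variance and semivariance of a portfolio's return series.
    `returns` is a periods-by-assets matrix of historical returns;
    `weights` are portfolio weights summing to 1."""
    port = np.asarray(returns) @ np.asarray(weights)  # portfolio return per period
    variance = port.var(ddof=1)
    downside = np.minimum(port - port.mean(), 0.0)    # below-mean periods only
    semivariance = (downside ** 2).sum() / (len(port) - 1)
    return variance, semivariance

# Example: a 60/40 split between two hypothetical assets
returns = [[0.02, 0.01], [-0.03, 0.00], [0.04, 0.02], [-0.01, -0.02]]
print(portfolio_risk([0.6, 0.4], returns))
```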

[Figure: Portfolio Optimization, Variance, Semi-Variance, and Total Return]

Once the set of optimized portfolios has been generated, the user can select the “best” portfolio based on their selection criteria.

So far I have learned that many financial advisers and fund managers are aware of post-modern portfolio theory (PMPT) measures such as semivariance, but are also a bit wary of them.  At the same time, some I have spoken with acknowledge that semivariance and parts of PMPT are the likely future of investing.  Portfolio managers want to be equipped for the day when one of their big investors asks, “What is the Sortino ratio of my portfolio? Can you reduce the semi-variance of my portfolio?”

I was surprised to hear that all of Sigma1 beta partners are interested exclusively in a web-based interface. This preliminary finding is encouraging because it aligns with a business model that protects Sigma1 IP from unsanctioned copying and reverse-engineering.

Another surprise has been the sizes of the asset sets supplied, ranging from 30 to 50 assets. Prior to the beta, I put significant effort into ensuring that HAL0 optimization could handle 500+ asset portfolios. My goal, which I achieved, was high-quality optimization of 500 assets in one hour, plus overnight deep-dive optimization (adding 8-10 basis points of additional expected return for a given variance/semi-variance). On the portfolio assets provided to date, deep-dive runtimes have all been under 5 minutes.

The beta-testing phase has provided me with a prioritized list of software improvements. #1 is per-asset weighting limits. #2 is an easy-to-use web interface. #3 is focused optimization, such as the ability to set a max variance.  There have also been company-specific requests that I will strive to implement as time permits.

Financial professionals (financial advisers, wealth managers, fund managers, proprietary trade managers, risk managers, etc.) seem inclined to want to optimize and analyze risk in both old ways (mean-return variance) and new (historic worst-year loss, VaR measures, tail risk, portfolio stress tests, semivariance, etc.).

Some Sigma1 beta partners have been hesitant to provide proprietary risk measure algorithms.  These partners prefer to use built-in Sigma1 optimizations, receive the resulting portfolios, and perform their own in-house analysis of risk.  The downside of this is that I cannot optimize directly to proprietary risk measures.  The upside is that I can further refine the HAL0 algos to solve more universal portfolio-optimization problems.  Even indirect feedback is helpful.

Portfolio and fund managers are generally happy with mean-return variance optimization, but are concerned that semivariance-return measures are reasonably likely to change the financial industry in the coming years.  Luckily, the Sharpe ratio and Sortino ratio differ only in the denominator (σp versus σd).  By normalizing the definitions of volatility (currently called modified-return variance and modified-return semivariance), HAL0 software optimizes simultaneously for both (modified) Sharpe and Sortino ratios, or any Sharpe/Sortino hybrid ratio in between.  A variance-focused investor can use a 100% variance-optimized portfolio.  An investor wanting to dabble with semi-variance can explore portfolios with, say, a 70%/30% Sharpe/Sortino ratio.  And an investor fairly bullish on semivariance minimization could use a 20%/80% Sharpe/Sortino hybrid ratio.
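
To make the hybrid idea concrete, here is a minimal sketch that blends the two denominators linearly.  The linear blend and the weighting parameter are illustrative assumptions; the exact normalization HAL0 uses is not shown here.

```python
import numpy as np

def hybrid_ratio(returns, rf=0.0, w=0.7):
    """Sharpe/Sortino hybrid: blend the Sharpe denominator (standard
    deviation) with the Sortino denominator (downside deviation).
    w=1.0 gives a Sharpe-style ratio; w=0.0 gives a Sortino-style ratio."""
    excess = np.asarray(returns) - rf
    sigma_p = excess.std(ddof=1)                  # Sharpe denominator
    downside = np.minimum(excess, 0.0)
    sigma_d = np.sqrt((downside ** 2).mean())     # Sortino denominator
    return excess.mean() / (w * sigma_p + (1.0 - w) * sigma_d)

# A 70%/30% Sharpe/Sortino hybrid on made-up monthly returns
monthly = [0.02, -0.01, 0.03, 0.015, -0.02, 0.01]
print(hybrid_ratio(monthly, rf=0.001, w=0.7))
```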

I am very thankful to investment managers and other financial pros who are taking the time to explore the capabilities of HAL0 portfolio-optimization software.  I am hopeful that, over time, I can persuade some beta partners to become clients as HAL0 software evolves and improves.  In other cases I hope to provide Sigma1 partners with new ideas and perspectives on portfolio optimization and risk analysis.  Even in one short month, every partner has helped HAL0 software become better in a variety of ways.

Sigma1 is interested in taking on 1 or 2 additional investment professionals as beta partners.  If interested, please submit a brief request for info on our contact page.


Flaws in Stock and ETF Charts


Traditional Stock Charts can Mislead

Almost every stock chart presents incomplete data for a security’s total return.  Simply put, stock charts don’t reflect dividends and distributions; they show price data alone.  A handful of charts superimpose dividends over the price data.  Such charts are an improvement, but require mental gymnastics to correctly interpret total return.

At the end of the year, I suspect the vast majority of investors are much more interested in how much money they made than in whether their profits came from asset appreciation, dividends, interest, or other distributions.  In the case of tax-deferred or tax-exempt accounts (such as IRAs, Roth IRAs, and 401k accounts) the source of returns is unimportant.  Naturally, for other portfolios, some types of return are more tax-advantaged than others.  In one case I tried to persuade a relative that MUB (iShares S&P National AMT-Free Muni Bd) was a good investment for them in spite of its chart, because the chart did not show the positive tax impact of tax-exempt income.

Our minds see what they want to see.  When we compare two stocks (or ETFs) we often have a slight bias towards one.  If we see what we want in a stock’s chart, we may look past the dividend annotations and make an incorrect decision.

This 1-year chart comparing two ETFs illustrates the point.  These two ETFs track each other reasonably well until Dec 16th, when there is a sharp drop in PBP.  This large dip reflects the effect of a large distribution of roughly 10%.  Judging strictly by the price data, it at first appears that SPY beats PBP by 7%.  Factoring in the yields of PBP (about 10.1%) and SPY (roughly 1.9%), however, shows a 1.2% 1-year outperformance by PBP.  First appearances show SPY outperforming; a little math shows PBP outperforming.

Yahoo! Finance provides raw data adjusted for dividends and distributions.  Using the 1-year start and end data shows SPY returning a net 3.77%, and PBP returning a net 4.96%.  The delta shows a 1.19% outperformance by PBP.  Yahoo! Finance’s tables have all the right data; I would love to see Yahoo! add an option to display this adjusted-price data graphically.

Total return is not a new concept.  Bill Gross was very insightful in naming PIMCO’s “Total Return” lineup of funds over 25 years ago.  Many mutual funds provide total return charts.  For instance, Vanguard provides total return charts for investments such as Vanguard Total Stock Market Index Fund Admiral Shares.  I am pleased to see Fidelity offering similar charts for ETFs in research “performance” reports for its customers.  Unfortunately, I have not found a convenient way to superimpose two total-return charts.

While traditional stock and ETF charts do not play a large role in my investment decisions, I do look at them when evaluating potential additions to my investment portfolio.  When I do look at charts, I’d prefer to have the option of looking at total-return charts rather than “old fashioned” price charts.

That said, I prefer to use quantitative portfolio analysis as my primary asset allocation technology.  For such analysis I compute total return data for each asset from price data and distribution data, assuming reinvestment.  Reformatting asset data in this way allows HAL0 portfolio-optimization software to directly compare different asset classes (gold, commodities, stock ETFs, bond ETFs, leveraged ETFs, etc).  Moreover, such pre-formatting allows faster computation of risk for various asset allocations within a portfolio.
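
Here is a minimal sketch of that preprocessing step, assuming each distribution is reinvested at that period’s closing price.  HAL0’s actual data pipeline may differ in its details.

```python
import numpy as np

def total_return_index(prices, distributions):
    """Convert a price series plus per-period distributions into a
    total-return index, reinvesting each distribution at that
    period's closing price."""
    prices = np.asarray(prices, dtype=float)
    dist = np.asarray(distributions, dtype=float)
    shares = 1.0
    index = [prices[0]]
    for p, d in zip(prices[1:], dist[1:]):
        shares *= 1.0 + d / p        # reinvest the distribution at price p
        index.append(shares * p)
    return np.array(index)

# A 10% distribution in period 2 shows up in total return, even though
# the raw price drops (much like the PBP example above).
print(total_return_index([100.0, 101.0, 91.0, 92.0], [0.0, 0.0, 10.0, 0.0]))
```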

A large part of my vision for Sigma1 is revolutionizing how investors and money managers visualize and conceptualize portfolio construction.  The key pieces of that conceptual revolution are:

  1. Rethinking return to always mean total return.
  2. Rethinking risk to mean something other than variance or standard deviation.

Many already think of total return as the key measure of raw portfolio performance.  It is odd, then, that so many charts display something other than total return.  And some would like to measure, manage, and model risk in more robust ways.  A major obstacle to alternate risk measures is a dearth of financial portfolio optimization tools that work with PMPT models such as semi-variance.

HAL0 is designed from the ground up to address the goals of optimizing portfolios based on total return and a wide variety of advanced, more-robust risk models.  (And, yes, total return can be defined in terms of after-tax total return, if desired.)

Disclosure:  I have long positions in SPY, the Vanguard Total Stock Market Index, and PBP.


Greener Software is Better Software

[Figure: CPU Load Correlates with Power Consumption]

Faster Software is Greener Software

Simply put, when one software product is more efficient than another, it solves the same problem in less time.  The less time software takes to run, the less power it consumes.

By way of illustration, consider the efficiency of a steam ship going from New York to San Francisco before and after the Panama Canal was built.  The canal was a technological marvel of its time, and it cut the journey distance from 13,000 miles to 5,000.  It cut travel time by (more than) half, and reduced the journey’s coal consumption by 50%.  The same work was performed with the same “hardware” (the steamer), but in just 30 days rather than 60, and using half the fuel.

Faster run time is the most significant and most visible component of green software, but it is not the only significant factor.  Other factors affecting how much power software consumes include:

  • Cache miss rate
  • Streamlined versus bloated, crufty software
  • Use of best-suited hardware resources
  • Algorithm scalability

Without getting too technical, I’ll briefly touch on each bullet point.  A cache hit is when a CPU finds the information it needs in its internal cache memory, while a cache miss is when the CPU must send an off-chip request to the computer’s RAM to get the required data.  A cache miss is about 100x slower than a cache hit, in part because the data has to travel about 10cm for a cache miss, versus about 5mm for a cache hit.  The difference in power consumption between a cache hit and a cache miss can easily be 20x to 100x, or more.
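
A quick way to feel the cost of poor cache behavior is to walk the same data in memory order versus strided order.  This toy Python/NumPy example sums a matrix both ways; the strided version runs markedly slower, in large part because of cache misses.

```python
import time
import numpy as np

a = np.zeros((4000, 4000))          # C-contiguous: each row is adjacent in memory

def sum_by_rows(m):
    total = 0.0
    for i in range(m.shape[0]):
        total += m[i, :].sum()      # sequential access: cache-friendly
    return total

def sum_by_cols(m):
    total = 0.0
    for j in range(m.shape[1]):
        total += m[:, j].sum()      # strided access: far more cache misses
    return total

for fn in (sum_by_rows, sum_by_cols):
    start = time.perf_counter()
    fn(a)
    print(fn.__name__, round(time.perf_counter() - start, 3), "seconds")
```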

Most software starts out reasonably streamlined.  Later, if the software is popular, there comes a time when enhancement requests and bug reports arrive faster than developers can implement them in a streamlined manner.  Consequently many developers implement quick but inefficient fixes.  Often this behavior is encouraged by managers trying to hit aggressive schedule commitments.  The developers intend to come back and improve the code, but frequently their workload doesn’t permit that.  After a while developers forget where the software “kludges” or hacks are.  Even worse, the initial developers either get reassigned to other projects or leave for other jobs.  The new software developers are challenged to learn the unfamiliar code and implement fixes and enhancements — adding their own cruft.  This is how crufty, bloated software emerges: overworked developers, focused on schedule over software efficiency, and developer turnover.

Modern CPUs have specialized instructions and hardware for different compute operations.  One example is Intel’s SSE technology, which features a variety of Single-Instruction, Multiple-Data (SIMD) extensions.  For example, SSE4 (and AVX) can add 4 or more pairs of numbers (two 4-number vectors) in one operation, rather than in 4 separate ADD operations.  This reduces CPU instruction traffic and saves power and time.
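
In higher-level languages, the practical version of this advice is to express math as whole-vector operations so the runtime’s compiled loops can use SIMD hardware.  A trivial illustration:

```python
import numpy as np

x = np.random.rand(1_000_000)
y = np.random.rand(1_000_000)

# One vector expression: NumPy's compiled loops can exploit SIMD units,
# adding several pairs of numbers per instruction.
z = x + y

# The element-at-a-time equivalent produces the same answer, but forces
# one scalar ADD (plus interpreter overhead) per element.
z_slow = np.empty_like(x)
for i in range(x.size):
    z_slow[i] = x[i] + y[i]

assert np.allclose(z, z_slow)
```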

Finally, algorithm scalability is increasingly important to modern computing and compute efficiency.  Scalability has many meanings, but I will focus on the ability of software to use multiple compute resources in parallel.  [Also known as parallel computing.]  Unfortunately, most software in use today has limited or no compute-resource scalability, meaning it can use only 1 core of a modern 4-core CPU.  In contrast, linearly-scalable software could run 3x faster by using 3 of the 4 cores at full speed.  Even better, it could run 3x faster on 4 cores running at 75% speed, and consume about 30% less power.  [I’ll spare you the math, but if you are curious this link will get you started.]
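
For completeness, here is a skeletal example of the kind of compute-resource scalability I mean: independent trials farmed out to a pool of worker cores.  The work function is a meaningless stand-in, purely hypothetical.

```python
import random
from multiprocessing import Pool

def optimize_chunk(seed):
    """Stand-in for one independent optimization trial: each worker
    searches its own corner of the solution space."""
    rng = random.Random(seed)
    return max(rng.random() for _ in range(10**6))

if __name__ == "__main__":
    # Embarrassingly parallel trials scale near-linearly with core count.
    with Pool(processes=3) as pool:        # e.g. 3 cores of a 4-core CPU
        best = max(pool.map(optimize_chunk, range(30)))
    print(best)
```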

“Distributed Software” is Greener

Distributed computing is technology that allows compute jobs to be distributed into the “cloud” or data-center queue.  Rather than desktop workstations sitting idle much of the day, a data center is a room full of computers that directs compute jobs to the least-busy machines.  Jobs can be directed to the computers best-suited to a particular compute request.  Intelligent data centers can even put unused computers into a “deep sleep” mode that uses very little power.

I use the term distributed software to mean software that is easily integrated with a job-submission or queuing software infrastructure.  [Short for distributed-computing-capable software.]  Clearly distributed software benefits directly from the efficiencies of a given data center.  Distributed software can also benefit from the ability to run in parallel on multiple machines.  The more tightly-coupled with the capabilities and status of the data center, the more efficiently distributed software can adapt to dynamic changes.

Sigma1 Software is Green

Sigma1 financial software (code-named HAL0) has been designed from the ground up to be lean and green.  First and foremost, HAL0 (named in honor of Arthur C. Clarke’s HAL 9000 — “H-A-L is derived from Heuristic ALgorithmic (computer)”) is architected to scale near-linearly to tens or hundreds of cores, “sockets”, or distributed machines.  Second, the central kernel or engine is designed to be as light-weight and streamlined as possible — helping to reduce expensive cache misses.  Third, HAL0 uses heuristic algorithms and other “AI” features to efficiently navigate astronomically-large search spaces (10^18 points and higher).  Fourth, HAL0 uses an innovative computation-cache system that allows repeated complex computations to be looked up in the cache, rather than recomputed.  In alpha testing, this feature alone accounted for a 3X run-time improvement.  Finally, HAL0 portfolio software incorporates a number of more modest run-time and power-saving features, such as coding vector operations explicitly as vector operations, thus allowing easier use of SIMD and possibly GPGPU instructions and hardware.
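
The computation-cache idea is essentially memoization: if the same expensive evaluation is requested twice, return the stored answer instead of recomputing it.  A minimal sketch follows; the placeholder math and cache policy are mine, as HAL0’s cache is proprietary.

```python
from functools import lru_cache

@lru_cache(maxsize=100_000)
def risk_score(weights):
    """Expensive risk evaluation, keyed on a hashable tuple of weights.
    The sum-of-squares math is a placeholder for the real computation."""
    return sum(w * w for w in weights) ** 0.5

print(risk_score((0.5, 0.3, 0.2)))   # computed
print(risk_score((0.5, 0.3, 0.2)))   # served from the cache
print(risk_score.cache_info())       # hits=1, misses=1
```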

Some financial planners still use Microsoft Excel to construct and optimize portfolios.  This is slow and inefficient — to say the least.  Other portfolio software I have read about is an improvement over Excel, but doesn’t mention scalability or heuristic algorithms.  It is possible, perhaps likely, that other financial software with some of the capabilities of HAL0 exists.  I suspect, however, that if it does, it is proprietary, in-house software that is not for sale.

A Plea for Better, Greener Software

In closing, I’d like the software community to consider how the efficiency (or inefficiency) of their current software products contributes to world-wide power consumption.  Computer hardware has made tremendous strides in improving performance-per-watt in the last ten years, and continues to do so.  IT and data-center technology is also becoming more power efficient.  Unfortunately, most software has been trending in the opposite direction — becoming more bloated and less efficient.  I urge software developers and software managers to consider the impact of the software they are developing.  I challenge you to consider, probably for the first time, how many kilowatt- or megawatt-hours your current software is likely to consume.  Then ask yourself, “How can I reduce that power?”

Seeking a Well-Matched Angel Investor (Part I)

Most of the reading I have done regarding angel investing suggests that finding the right “match” is a critical part of the process.  This process is not just about a business plan and a product, it is also about people and personalities.

Let me attempt to give some insight into my entrepreneurial personality.  I have been working (and continue to work) in a corporate environment for 15 years.  Over that time I have received a lot of feedback, and two common themes emerge from it:  1) I tend to be a bit too “technical”;  2) I tend to invest more effort in work that I like.

Long Story about my Tech Career

Since I work in the tech industry, being too technical at first didn’t sound like something I should work on.  I eventually came to understand that this wasn’t feedback from my peers, but from managers.  Tech moves so fast that many managers simply do not keep up with these changes except in the most superficial ways.  (Please note I say many, not most.)  While being technical is my natural tendency, I have learned to adjust the technical content to suit the composition of the meeting room.

The second theme has been a harder personal challenge.  Two general areas I love are technical challenges and collaboration.  I love when there is no “smartest person in the room” because everybody is the best at at least one thing, if not many.  When a team like that faces a new critical issue — never before seen — magic often occurs.  To me this is not work; it is much closer to play.

I have seen my industry, VLSI and microprocessor design, evolve and mature.  While everyone is still the “smartest person in the room”, the arrival of novel challenges is increasingly rare.   We are increasingly challenged to become masters of execution rather than masters of innovation.

Backing up a bit, when I started at Hewlett-Packard, straight out of college, I had the best job in the world, or darn near.  For 3-4 months I “drank from a fire hose” of knowledge from my mentor.  After just 6 months I was given what, even in retrospect, were tremendous responsibilities (and a nice raise).  I was put in charge of integrating “logic synthesis” software into the lab’s compute infrastructure.  When I started, about 10% of the lab’s silicon area was created via synthesis; when I left 8 years later, about 90% of the lab’s silicon was created via logic synthesis.  I was part of that transformation, but I wasn’t the cause — logic synthesis was simply the next disruptive technology in the industry.

So why did I change companies?  I was developing software to build advanced “ASICs”.  First the company moved ASIC manufacturing overseas, then increasingly ASIC hardware design.  The writing was on the wall… ASIC software development would eventually move too.  So I made a very difficult choice and moved into microprocessor software development.  Looking back now, this was likely the best career choice I have ever made.

Practically overnight I was again “drinking from a fire hose.”  Rather than working with software my former teammates and I had built from scratch, I was knee-deep in poorly-commented code that had been abandoned by all but one of the original developers.  In about 9 months my co-developer and I had transformed this code into something that resembled properly-architected software.

Again, I saw the winds of change transforming my career environment: this time, microprocessor design.  Software development was moving from locally-integrated hardware/software design labs to a centralized software-design organization.  Seeing this shift, I moved within the company, to microprocessor hardware design.  Three and a half years later I see the pros and cons of this choice.  The largest pro is having about 5 times more opportunities in the industry — both within the company, and without.  The largest con, for me, is dramatically less software development work.  Hardware design still requires some software work, perhaps, 20-25%.  Much of this software design, however, is very task-specific.  When the task is complete — perhaps after a week or a month — it is obsolete.

A Passion for Software and Finance

While I was working, I spent some time in grad school. I took all the EE classes that related to VLSI and microprocessor design. The most interesting class was an open-ended research project. The project I chose, while related directly to microprocessor design, had a 50/50 mix of software design and circuit/device-physics research. I took over the software design work, and my partner took on most of the other work. The resulting paper was shortened and revised (with the help of our professor and a third grad student) and accepted for presentation at the 2005 Society of Industrial and Applied Mathematics (SIAM) Conference in Stockholm, Sweden.  Unfortunately, none of us were able to attend due to conflicting professional commitments.

Having exhausted all “interesting” EE/ECE courses, I started taking grad school courses in finance.  CSU did not yet have a full-fledged MSBA in Financial Risk Management program, but it did offer a Graduate Certificate in Finance, which I earned.  Some research papers of note include “Above Board Methods of Hedging Company Stock Option Grants” and “Building an ‘Optimal’ Bond Portfolio including TIPS.”

Software development has been an interest of mine since I took a LOGO summer class in 5th grade.  It has been a passion of mine since I taught myself “C” in high school.  During my undergrad in EE, I took enough CS electives to earn a Minor in Computer Science along with my BSEE.   Almost all of my elective CS courses centered around algorithms and AI.   Unlike EE, which at times I found very challenging, I found CS courses easy and fun.  That said, I earned straight A’s in college, grad and undergrad, with one exception: I got a B- in International Marketing.  Go figure.

My interest in finance started early as well.  I had a paper route at the age of 12, and a bank account.  I learned about compound interest and was hooked.  With help from my Dad, and still 12 years old, I soon had a money market account and a long-maturity zero-coupon bond.  My full-fledged passion for finance developed when I was issued my first big grant of company stock options.  I realized I knew quite a bit about stocks, bonds, CDs and money market funds, but I knew practically nothing about options.  Learning about options was the primary reason I started studying finance in grad school.  I was, however, soon to learn about CAPM and MPT, and portfolio construction and optimization.  Since then, trying to build the “perfect” portfolio has been a lingering fascination.

Gradually, I began to see flaws in MPT and the efficient-markets hypothesis (EMH).  Flaws that Markowitz acknowledged from the beginning!  [Amazing what you can learn from going beyond textbooks, and back to original sources.]   I read in some depth about the rise and demise of Long-Term Capital Management.  I read about high-frequency trading methods and algorithms.  I looked into how options can be integrated into long-term portfolio-building strategies.  And finally, I started researching the ever-evolving field of Post-Modern Portfolio Theory (PMPT.)

When I finally realized how I could integrate my software development skills, my computer science (AI) background, my graduate EE/ECE work and my financial background into a revolutionary software product, I was thunderstruck. I can and did build the alpha version of this product, HAL0, and it works even better than I expected.  If I can turn this product into a robust business, I can work on what I like, even what I love.  And that passion will be a strength rather than a “flaw”.   Send me an angel!


Toss your Financial Slide-rule: Beta Computation, MPT, and PMPT

Let me take you back to grad school for a few moments, or perhaps your college undergrad. If you’ve studied much finance, you’ve surely studied beta in the context of modern portfolio theory (MPT) and the Capital-Asset Pricing Model (CAPM). If you are a quant like me, you may have been impressed with the elegance of the theory. A theory that explains the value and risk of a security, not in isolation, but in the context of markets and portfolios.

Markowitz‘s MPT book, in the late 50’s, must have come as a clarion call to some investment managers.  Published ten years prior, Benjamin Graham’s The Intelligent Investor was, perhaps, the most definitive book of its time.  Graham’s book described an intelligent portfolio as a roughly 50/50 stock/bond mix, where each stock or bond had been selected to provide a “margin of safety”.  Graham provided a value-oriented model for security analysis; Markowitz provided the tools for portfolio analysis.  Markowitz’s concept of beta added another dimension to security analysis.

As I explore new frontiers of portfolio modeling and optimization, I like to occasionally survey the history of the evolving landscape of finance.  My survey led me to put together a spreadsheet to compute β.  Here is the beta-computation spreadsheet.  The Excel spreadsheet uses three different methods to compute β, and they produce nearly identical results.  I used 3 years of weekly adjusted closing-price data for the computations.  R² and α (alpha) are also computed.  The “nearly” part of identical gives me a bit of pause — is it simply round-off, or are there errors?  Please let me know if you see any.
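
For readers who prefer code to spreadsheets, the regression approach reduces to a few lines of Python.  This sketch should agree, to rounding, with the spreadsheet’s methods.

```python
import numpy as np

def beta_alpha_r2(asset_returns, benchmark_returns):
    """Beta, alpha, and R-squared of an asset versus its benchmark,
    computed from matched periodic (e.g. weekly) returns."""
    a = np.asarray(asset_returns)
    b = np.asarray(benchmark_returns)
    beta = np.cov(a, b, ddof=1)[0, 1] / b.var(ddof=1)
    alpha = a.mean() - beta * b.mean()
    r2 = np.corrcoef(a, b)[0, 1] ** 2
    return beta, alpha, r2

# Example with made-up weekly returns
asset = [0.010, -0.004, 0.012, 0.003, -0.008]
bench = [0.008, -0.005, 0.010, 0.002, -0.006]
print(beta_alpha_r2(asset, bench))
```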

An ancient saying goes “Seek not to follow in the footsteps of men of old; seek what they sought.”   The path of “modern” portfolio theory leaves behind many footprints, including β and R-squared.  Today, the computation of these numbers is a simple academic exercise.  The fact that these numbers represent closed-form solutions (CFS) to some important financial questions has an almost irresistible appeal to many quantitative analysts and finance academics.   CFS were just the steps along the path;  the goal was building better portfolios.

Markowitz’s tools were mathematics, pencils, paper, a slide rule, and books of financial data.  The first handheld digital calculator wasn’t invented until 1967.  As someone quipped, “It’s not like he had a Dell computer on his desk.”  He used the mathematical tools of statistics developed more than 30 years prior to his birth.  A consequence of his environment is Markowitz’s (primary) definition of risk: mean variance.  When first learning about mean-variance optimization (MVO), almost every astute learner eventually asks the perplexing question, “So upside ‘risk’ counts the same as the risk of loss?”  In MPT, the answer is a resounding “Yes!”

The current year is 2012, and most sophisticated investors are still using tools developed during the slide-rule era.  The reason the MVO approach to risk feels wrong is because it simply doesn’t match the way clients and investors define risk.  Rather than adapt to the clients’ view of risk, most investment advisers, ratings agencies, and money managers ask the client to fill out a “risk tolerance” questionnaire that tries to map investor risk models into a handful of MV boxes.

MPT has been tweaked and incrementally improved by researchers like Sharpe and Fama and French — to name a few.  But the mathematically convenient MV definition of risk has lingered like a baseball pitcher’s nagging shoulder injury.  Even if this metaphorical “injury” is not career-ending, it can be career-limiting.

There is a better way, though it has a clunky name: Post-Modern Portfolio Theory (PMPT).  [Clearly most quants and financial researchers are not good marketers… Next-Gen Portfolio Optimization, instead?]  The heart of PMPT can be summed up as “minimizing downside risk as measured by the standard deviation of negative returns.”  There is a good overview of PMPT in this Journal of Financial Planning article.  This quote from that article stands out brilliantly:

Markowitz himself said that “downside semi-variance” would build better portfolios than standard deviation. But as Sharpe notes, “in light of the formidable computational problems…he bases his analysis on the variance and standard deviation.”

“Formidable computational problems” of 1959 are much less so today.  Financial companies are replete with processing power, data storage, and computer networks.  In some cases developing efficient software to use certain PMPT concepts is easy; in other cases it can be extremely challenging.  (Please note the emphasis on the word ‘efficient’.  A financial algorithm that takes months to complete is unlikely to be of any practical use.)  The example Excel spreadsheet could easily be modified to compute a PMPT-inspired beta.  [Hint:  =IF(C4>0, 0, C4)]
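
Translating that hint into code: zero out the positive benchmark returns before running the regression.  Definitions of downside beta vary in the PMPT literature, so take this as one plausible reading rather than the definitive one.

```python
import numpy as np

def downside_beta(asset_returns, benchmark_returns):
    """A PMPT-inspired beta: regress the asset against only the
    benchmark's negative returns, per =IF(C4>0, 0, C4)."""
    a = np.asarray(asset_returns, dtype=float)
    b = np.asarray(benchmark_returns, dtype=float)
    b_down = np.where(b > 0.0, 0.0, b)   # keep losses, zero out gains
    return np.cov(a, b_down, ddof=1)[0, 1] / b_down.var(ddof=1)
```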

Are you ready to step off the beaten path constructed 50 years ago by wise men with archaic tools?  To step onto the hidden path they might have blazed, had they been armed with powerful computer technology?  Click the link to start your journey on the one less traveled by.

New Perspectives on Portfolio Optimization

[Figure: Risk/Reward Contours for 100 Optimized Portfolios]

Building superior investment portfolios is what money managers are paid to do. As a fund manager, I wanted software to help me build superior, positive-alpha portfolios.

Not finding software that did anything like I wanted, I decided to write my own.

When I build or modify a portfolio I start with investment ideas. Ideas like going short BWX (international government debt) and long JNK (US junk bonds). I want some US equity exposure with VTI and some modest buy-write protection through ETB. And I have a few stocks that I believe are likely to outperform the market. What I’d like is portfolio software that will take my list of stocks, ETFs, and other securities and show me the risk/reward tradeoff for a variety of portfolios composed of these securities.

Before I get too far ahead of myself, let me explain the above graphic. It uses two measures of risk and a proprietary measure of expected return. The risk measures are 3-year portfolio beta (vs. the S&P 500) and sector diversification. These risk measures are transformed into “utility metrics”, which simply means bigger is better. By maximizing utility, risk is minimized.

The risk utility metrics (or heuristics) are set up as follows: 10 is the absolute best score and 0 the worst. In this graph a beta of 1.0 results in a beta “risk metric” of 10. A beta of infinity would result in a beta risk metric of 0. For this simulation, I don’t care about betas less than 1, though they are not excluded. The sector diversification metric measures how closely any portfolio matches sector market-cap weights in the S&P 500. A perfect match scores a 10. The black “X” surrounded by a white circle denotes such a perfectly balanced portfolio. In fact, this portfolio is used to seed the construction of the wide range of investment portfolios depicted in the chart.
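
As a concrete example of one such mapping, here is a simple curve consistent with the description above: a beta of 1.0 scores 10, a beta of infinity scores 0, and betas below 1 are not penalized. The actual HAL0 heuristic is not disclosed, so this is only a plausible stand-in.

```python
def beta_utility(beta):
    """Map portfolio beta onto a 0-10 'bigger is better' scale."""
    return 10.0 if beta <= 1.0 else 10.0 / beta

for b in (0.8, 1.0, 1.5, 2.0, 10.0):
    print(b, "->", round(beta_utility(b), 1))
```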

One thing is immediately clear. Moving away from the relative safety of the 10/10 corner, expected returns increase, from 7.8% up to 15%. Another observation is that the software doesn’t think there is much benefit in increased beta (decreased beta metric) unless sector diversification is also decreased.  [This is the software “talking”, not my opinion, per se.]

The contour lines help visualize the risk tradeoffs (trading beta risk for non-diversification risk) for a particular expected rate of return.  The pink 11% return contour looks almost linear — an outcome I find a bit surprising given the non-linear risk-estimation heuristics used in the modeling.

For all that the graphic shows, there is much it does not.  It does not show the composition or weightings of securities used to build the 100 portfolios whose scores appear.  That data appears in reports produced by the portfolio-tuner software.  The riskiest, but highest expected-return portfolios are heavy in financials and, intriguingly, consumer goods.  More centrally-located portfolios, with expected returns in the 11% range, are over-weighted in the basic materials, services (retail), consumer goods, financial, and technology sectors.

Back to the original theme: desirable features of financial software — particularly portfolio-optimization software.  For discussion, let’s assign the codename HAL0 (HAL zero, in homage to HAL 9000) to this portfolio-optimizing software.  I don’t want dime-a-dozen stock/ETF screeners, but I do want software that I can ask, “HAL0, help me build a complete portfolio by finding securities that optimally complement this 70% core of securities.”  Or “HAL, let’s create a volatility-optimized portfolio based on this particular list of securities, using my expected rates of return.”  Even, “HAL, forget volatility, standard-deviation, etc., and use my measures of risk and return, and build a choice of portfolios tuned and optimized to these heuristics.”

These are things the alpha version of HAL0 can do today (except for understanding English… you have to speak HAL’s language to pose your requests).  The plot you see was generated from data produced in just under 3 hours on an inexpensive desktop running Linux.  That run used 10,000 iterations of the optimization engine.  However, 100 iterations, running in a mere 2 minutes, will produce a solution space that is nearly identical.

HAL0 supports n-dimensional solution spaces (surfaces, frontiers), though I’ve only tested 2-D and 3-D so far.  The fact that visualizing 4-D data would probably involve an animated 3-D video makes me hesitate.  And preserving “granularity” requires an exponential scaling in time complexity.  Ten data points provides acceptable granularity for a 2-D optimization, 100 data points is acceptable for 3-D, and 1000 data points for 4-D.  Under such conditions the 4-D sim would be a bit more than 10x slower.  If a granularity of 20 is desired, the 3-D sim would be slowed by 4X, and a 4-D optimization by an additional 8X.  I have considered the idea that a 4-D optimization could be used for a short time, say 10 iterations and/or with low granularity (say 8), and then one of the utility heuristics could be discarded and 3-D optimization (with higher depth and granularity) could continue from there… nothing in the HAL0 software precludes this.
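
The granularity arithmetic above follows from a simple rule: covering a d-dimensional frontier at granularity g takes roughly g^(d-1) portfolios.  A few lines of Python reproduce the numbers quoted.

```python
def frontier_points(granularity, dims):
    """Portfolios needed to cover a dims-dimensional solution space
    at a given granularity: granularity ** (dims - 1)."""
    return granularity ** (dims - 1)

for g in (10, 20):
    print(g, [frontier_points(g, d) for d in (2, 3, 4)])
# prints: 10 [10, 100, 1000]
#         20 [20, 400, 8000]  (4x the 3-D work, 8x the 4-D work)
```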

HAL0 is software built to build portfolios.  It uses algorithms from software my partner and I developed in grad school to solve engineering problems — algorithms that built upon evolutionary algorithms, AI, machine learning, and heuristic algorithms.  HAL0 also incorporates ideas and insights that I have had in the intervening 8 years.  Incorporated into its software DNA are features that I find extremely important: robustness, scalability, and extensibility.

Today HAL0 can construct portfolios comprised of stocks, ETFs, and highly-liquid bonds and commodities.  I have not yet figured out a satisfactory way to include options, futures, or assets such as non-negotiable CDs in the optimization engine.  Nor have I implemented multi-threading or distributed computing, though the software is designed from the ground up to support these scalability features.

HAL0 is in the late alpha-testing phase.  I plan to have a web-based beta-testing model ready by the end of 2012.

Disclaimer:  Do not make adjustments to your investment portfolio without first consulting a registered investment adviser (RIA), CFP or other investment professional.


Financial Software: Heuristics Explained


A Baseball Analogy

Imagine you’re the general manager of a Major League ball club.  Your primary job is to construct (and maintain) a team of players  that will win lots of games, while keeping the total player payroll as low as possible.  When considering a hypothetical roster a baseball GM has two primary objectives in mind:

  1. Total annual payroll (plus any associated “luxury tax”)
  2. Expected season wins (and post-season wins)

These objectives can also be called heuristics — rules of thumb to help find solutions to complex problems.  These heuristics can be turned into numbers (quantified) by creating cost functions or utility functions.  Please don’t let all of this jargon put you off; we are merely talking a little baseball here.

The cost function for payroll is just that… the total annual salaries for a proposed roster.  It is called a cost function because cost is something we are trying to minimize.  Expected wins is called a utility function, because utility is good, and we want to maximize it.

Now, accurately predicting the number of wins for a hypothetical (or real) roster of players is a real challenge.  Every scout and adviser is going to have his or her own ideas or heuristics.  Just watch Moneyball to see what I mean.  To turn any given roster into a utility score, a GM could write a proposed roster on a whiteboard and point-blank ask each adviser, “How many wins will this team produce?”  The GM could average these predictions and, boom!, that’s a utility function.  The GM could also hire a computer scientist and a statistician to code up a utility function for any proposed roster, relying on a chosen set of stats.

Either way, the GM can now evaluate any proposed roster based on two metrics: cost and wins.  These data can be plotted, and patterns will quickly emerge.  Some proposed rosters will be both more expensive and less “winning” than others.  These rosters are said to be dominated, and they can be removed from consideration.  Once all the dominated rosters are eliminated, what remains is a series of dots that forms a curve.  As one moves up that curve, one finds more winning, but more expensive, rosters.  Moving the other way, the payroll cost is less, but the expected wins decrease.  This curve resembles what financial folks call an efficient frontier — the expected risk/reward tradeoff for an optimized portfolio selected from a basket of securities.
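
Finding the non-dominated rosters is a quick computation.  Here is a minimal sketch over (cost, wins) pairs.

```python
def pareto_front(rosters):
    """Drop dominated rosters: a roster is dominated if another costs
    no more AND wins at least as much.  `rosters` is a list of
    (cost, wins) tuples; the survivors form the frontier curve."""
    front = []
    for cost, wins in sorted(rosters):        # ascending payroll cost
        if not front or wins > front[-1][1]:  # must beat best wins so far
            front.append((cost, wins))
    return front

# (100, 82) is dominated by (95, 90): cheaper AND more wins
print(pareto_front([(80, 85), (120, 95), (100, 82), (95, 90)]))
```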

Back to Portfolio Optimization Software

The baseball analogy above tries to explain mathematical concepts without resorting to math.   OK, I did use a few math words, but no equations!

There are several differences between a baseball roster and an investment portfolio.  Two key differences are:  1) you can own multiple shares of a stock or ETF (but only 1 of any player);  2) you can trade stocks/ETFs virtually whenever you want.

Nonetheless, the baseball analogy is useful in illustrating what Sigma1 Software will be able to do for fund managers and investors.  Instead of building a baseball roster, you are building an investment portfolio.  In the classic “CAPM” investing model, the cost function is standard deviation (risk), and the utility function is expected return.  Historical standard deviation is easy to compute, but expected return is much harder to compute accurately.

Now, if you are an active fund manager, you probably have in-house analysts paid to help you pick stocks (just like GM’s have scouts).  But scouting reports from analysts do not a portfolio make… even if your analysts are giving you positive-alpha stock picks. A robust asset allocation strategy is necessary to build a robust portfolio out of your chosen list of securities.

The Vision for Sigma1 Portfolio Software

A Vision for Financial Professionals

It started with the desire to create software that would allow me to build a better portfolio for my proprietary trading fund — software that could optimize portfolios using heuristics, cost functions, and utility functions of my own choosing.  I wanted to create portfolio software for investment managers that:

  • Allows them to select their own list of securities (or have one chosen dynamically from all investable securities)
  • Takes advantage of one or more “seed portfolios” if desired
  • Allows proprietary heuristics, cost functions, market models, etc. to plug seamlessly into the optimization engine
  • Isn’t limited to linear or Gaussian risk-analysis measures
  • Runs in minutes or hours, not days
  • Is capable of efficiently utilizing distributed and parallel computing resources — scalability

A Vision for “Retail” Investors

For retail investors, the general investing public, I envision scaled-down versions of the professional portfolio optimization software.  The retail investor software will run as an application on a web server.  A free version will provide portfolio optimization for a small basket of user-chosen securities, perhaps limiting portfolio size to 10.   A paid-subscription plan will offer more features and allow retail users to build larger portfolios.

To keep the software easy to use, a variety of ready-to-use heuristics will be available.  These are likely to include:

  • Standard deviation
  • Historic best-year and worst-year analysis
  • Beta (versus common indices)
  • Diversification measures (e.g. sector, market-cap)
  • Price-to-earnings
  • Proprietary expected-return predictors