Greener Software is Better Software

CPU Load and Green Software

CPU Load Correlates with Power Consumption

Faster Software is Greener Software

Simply put, more efficient software solves the same problem in less time.  The less time software takes to run, the less power it consumes.

By way of illustration, consider the efficiency of a steamship going from New York to San Francisco before and after the Panama Canal was built.  The canal was a technological marvel of its time, and it cut the journey distance from 13,000 miles to 5,000.  It cut travel time by more than half and reduced the journey’s coal consumption by about 50%.  The same work was performed, with the same “hardware” (the steamer), but in just 30 days rather than 60, and using half the fuel.

Faster run time is the most significant and most visible component of green software, but it is not the only one.  Other factors affecting how much power software consumes include:

  • Cache miss rate
  • Streamlined versus bloated, crufty software
  • Use of best-suited hardware resources
  • Algorithm scalability

Without getting too technical, I’ll briefly touch on each bullet point.  A cache hit is when a CPU finds the information it needs in its internal cache memory, while a cache miss is when the CPU must send an off-chip request to the computer’s RAM to get the required data.  A cache miss is roughly 100x slower than a cache hit, in part because the data has to travel about 10 cm for a cache miss, versus about 5 mm for a cache hit.  The difference in power consumption between a cache hit and a cache miss can easily be 20x to 100x, or more.
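As a rough illustration of memory-locality effects, sequential traversal lets the cache and hardware prefetcher do their job, while random-order access defeats them.  (This is only a sketch: in Python, interpreter overhead masks much of the gap, which is far larger in a compiled language.)

```python
import random
import time

N = 1_000_000
data = list(range(N))

# Sequential traversal: memory is touched in order, so the cache
# and prefetcher serve most accesses.
start = time.perf_counter()
seq_sum = sum(data)
seq_time = time.perf_counter() - start

# Random-order traversal touches memory unpredictably, causing far
# more cache misses for the same amount of work.
order = list(range(N))
random.shuffle(order)
start = time.perf_counter()
rand_sum = sum(data[i] for i in order)
rand_time = time.perf_counter() - start

print(f"sequential: {seq_time:.3f}s  random-order: {rand_time:.3f}s")
```

Both loops compute the same sum; only the access pattern differs, and the access pattern alone changes the run time (and thus the energy consumed).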

Most software starts out reasonably streamlined.  Later, if the software is popular, comes a time when enhancement requests and bug reports come in faster than developers can implement them in a streamlined manner.  Consequently many developers implement quick but inefficient fixes.  Often this behavior is encouraged by managers trying to hit aggressive schedule commitments.  The developers have intentions to come back and improve the code, but frequently their workload doesn’t permit that.  After a while developers forget where the software “kludges” or hack are.  Even worse, the initial developers either get reassigned to other projects or leave for other jobs.  The new software developers are challenged learn the unfamiliar code and implement fixes and enhancements — adding their own cruft.   This is how crufty, bloated software emerges:  overworked developers, focused on schedule over software efficiency, and developer turnover.

Modern CPUs have specialized instructions and hardware for different compute operations.  One example is Intel’s SSE technology, which features a variety of Single-Instruction, Multiple-Data (SIMD) extensions.  For example, SSE4 (and AVX) can add four or more pairs of numbers (two four-element vectors) in one operation, rather than four separate ADD operations.  This reduces CPU instruction traffic and saves power and time.
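A small sketch of what this looks like from the programmer’s side, assuming NumPy is available: NumPy dispatches vectorized arithmetic to SIMD hardware (such as SSE/AVX) where the CPU supports it, so one expression replaces a loop of scalar ADDs.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)
b = np.array([10.0, 20.0, 30.0, 40.0], dtype=np.float32)

# One vectorized add over two four-element vectors: on SIMD-capable
# hardware this maps to a single packed instruction rather than four
# scalar ADD operations.
c = a + b
print(c)  # [11. 22. 33. 44.]
```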

Finally, algorithm scalability is increasingly important to modern computing and compute efficiency.  Scalability has many meanings, but I will focus on the ability of software to use multiple compute resources in parallel.  [Also known as parallel computing.]   Unfortunately, most software in use today has limited or no compute-resource scalability, meaning it can use only 1 core of a modern 4-core CPU.  In contrast, linearly-scalable software could run 3x faster by using 3 of the 4 cores at full speed.  Even better, it could run 3x faster on 4 cores running at 75% speed, and consume about 30% less power.  [I’ll spare you the math, but if you are curious this link will get you started.]
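The intuition behind that figure comes from the idealized dynamic-power relation for CMOS circuits, where power scales roughly with the cube of frequency (since supply voltage scales approximately with frequency):

```latex
% Idealized dynamic-power model: P \propto C V^2 f, with V roughly \propto f,
% so per-core power scales as P \propto f^3.
\begin{align*}
P_{3\ \text{cores at}\ f}     &\propto 3 f^3 \\
P_{4\ \text{cores at}\ 0.75f} &\propto 4\,(0.75 f)^3 \approx 1.69 f^3
\end{align*}
```

Both configurations deliver roughly 3x single-core throughput, but the slower-clocked one draws about 44% less power under this idealized cubic model.  In practice, supply voltage cannot scale fully with frequency, so real savings are smaller and land nearer the ~30% cited above.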

“Distributed Software” is Greener

Distributed computing is technology that allows compute jobs to be distributed into the “cloud” or a data center queue.  Rather than leaving desktop workstations sitting idle much of the day, a data center (a room full of computers) directs compute jobs to the least busy machines.  Jobs can also be directed to the computers best suited to a particular compute request.  Intelligent data centers can even put unused computers into a “deep sleep” mode that uses very little power.
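A minimal sketch of the “least busy machine” idea (illustrative only; real schedulers weigh machine capabilities, power state, and much more):

```python
import heapq

def assign_jobs(job_costs, num_machines):
    """Toy scheduler: send each job to the machine with the least
    queued work, tracked in a min-heap of (load, machine_id)."""
    heap = [(0, m) for m in range(num_machines)]
    heapq.heapify(heap)
    assignment = []
    for cost in job_costs:
        load, machine = heapq.heappop(heap)   # least-busy machine
        assignment.append(machine)
        heapq.heappush(heap, (load + cost, machine))
    return assignment

print(assign_jobs([5, 3, 8, 1, 2], 2))  # → [0, 1, 1, 0, 0]
```

Each job lands on whichever machine has the smallest backlog at submission time, keeping overall load balanced.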

I use the term distributed software to mean software that is easily integrated with a job-submission or queuing software infrastructure.  [Short for distributed-computing-capable software.]  Clearly, distributed software benefits directly from the efficiencies of a given data center.  It can also benefit from the ability to run in parallel on multiple machines.  The more tightly distributed software is coupled to the capabilities and status of the data center, the more efficiently it can adapt to dynamic changes.

Sigma1 Software is Green

Sigma1 financial software (code-named HAL0) has been designed from the ground up to be lean and green.  First and foremost, HAL0 (named in honor of Arthur C. Clarke’s HAL 9000 — “H-A-L is derived from Heuristic ALgorithmic (computer)”) is architected to scale near-linearly to tens or hundreds of cores, “sockets”, or distributed machines.  Second, the central kernel or engine is designed to be as light-weight and streamlined as possible — helping to reduce expensive cache misses.  Third, HAL0 uses heuristic algorithms and other “AI” features to efficiently navigate astronomically large search spaces (10^18 and higher).  Fourth, HAL0 uses an innovative computation cache system that allows repeated complex computations to be looked up in the cache, rather than recomputed.  In alpha testing, this feature alone accounted for a 3x run-time improvement.  Finally, HAL0 portfolio software incorporates a number of more modest run-time and power-saving features, such as coding vector operations explicitly as vector operations, thus allowing easier use of SIMD and possibly GPGPU instructions and hardware.
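The computation-cache idea is, in general terms, memoization.  HAL0’s internals are not described here, so the following is only a sketch of the general technique using Python’s standard `lru_cache`, with a hypothetical stand-in for an expensive calculation:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def risk_metric(portfolio_weights):
    # Hypothetical stand-in for a complex, repeated computation;
    # the decorator caches results keyed by the arguments.
    return sum(w * w for w in portfolio_weights)

w = (0.25, 0.25, 0.5)
first = risk_metric(w)    # computed
second = risk_metric(w)   # served from the cache, not recomputed
print(risk_metric.cache_info())
```

The second call returns instantly from the cache; for genuinely expensive computations repeated many times, this is where a multiple-x run-time improvement can come from.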

Some financial planners still use Microsoft Excel to construct and optimize portfolios.  This is slow and inefficient — to say the least.  Other portfolio software I have read about is an improvement over Excel, but mentions neither scalability nor heuristic algorithms.  It is possible, perhaps likely, that other financial software with some of the capabilities of HAL0 exists.  I suspect, however, that if it does, it is proprietary, in-house software that is not for sale.

A Plea for Better, Greener Software

In closing, I’d like the software community to consider how the efficiency (or inefficiency) of their current software products contributes to world-wide power consumption.  Computer hardware has made tremendous strides in improving performance per watt in the last ten years, and continues to do so.  IT and data-center technology is also becoming more power efficient.  Unfortunately, most software has been trending in the opposite direction — becoming more bloated and less efficient.  I urge software developers and software managers to consider the impact of the software they are developing.  I challenge you to consider, probably for the first time, how many kilowatt- or megawatt-hours your current software is likely to consume.  Then ask yourself, “How can I reduce that power?”

Seeking a Well-Matched Angel Investor (Part I)

Most of the reading I have done regarding angel investing suggests that finding the right “match” is a critical part of the process.  This process is not just about a business plan and a product, it is also about people and personalities.

Let me attempt to give some insight into my entrepreneurial personality.  I have been working (and continue to work) in a corporate environment for 15 years.  Over that time I have received a lot of feedback, and two common themes emerge from it:  1) I tend to be a bit too “technical”.  2) I tend to invest more effort in work that I like.

Long Story about my Tech Career

Since I work in the tech industry, being too technical at first didn’t sound like something I should work on.  I eventually came to understand that this wasn’t feedback from my peers, but from managers.  Tech moves so fast that many managers simply do not keep up with these changes except in the most superficial ways.  (Please note I say many, not most.)  While being technical is my natural tendency, I have learned to adjust the technical content to suit the composition of the meeting room.

The second theme has been a harder personal challenge.  Two general areas I love are technical challenges and collaboration.  I love when there is no “smartest person in the room” because everybody is the best at at least one thing, if not many.  When a team like that faces a new critical issue — never before seen — magic often occurs.  To me this is not work; it is much closer to play.

I have seen my industry, VLSI and microprocessor design, evolve and mature.  While everyone is still the “smartest person in the room”, the arrival of novel challenges is increasingly rare.   We are increasingly challenged to become masters of execution rather than masters of innovation.

Backing up a bit, when I started at Hewlett-Packard, straight out of college, I had the best job in the world, or darn near.  For 3-4 months I “drank from a fire hose” of knowledge from my mentor.  After just 6 months I was given what, even in retrospect, were tremendous responsibilities (and a nice raise).  I was put in charge of integrating “logic synthesis” software into the lab’s compute infrastructure.  When I started, about 10% of the lab’s silicon area was created via synthesis; when I left 8 years later, about 90% was.  I was part of that transformation, but I wasn’t the cause — logic synthesis was simply the next disruptive technology in the industry.

So why did I change companies?  I was developing software to build advanced “ASICs”.  First the company moved ASIC manufacturing overseas, then increasingly ASIC hardware design.  The writing was on the wall… ASIC software development would eventually move too.  So I made a very difficult choice and moved into microprocessor software development.  Looking back now, this was likely the best career choice I have ever made.

Practically overnight I was again “drinking from a fire hose.”  Rather than working with software my former teammates and I had built from scratch, I was knee-deep in poorly-commented code that had been abandoned by all but one of the original developers.  In about 9 months, my co-developer and I transformed this code into something that resembled properly-architected software.

Again, I saw the winds of change transforming my career environment: this time, microprocessor design.  Software development was moving from locally-integrated hardware/software design labs to a centralized software-design organization.  Seeing this shift, I moved within the company, to microprocessor hardware design.  Three and a half years later, I see the pros and cons of this choice.  The largest pro is having about 5 times more opportunities in the industry — both within the company and without.  The largest con, for me, is dramatically less software development work.  Hardware design still requires some software work, perhaps 20-25%.  Much of this software, however, is very task-specific.  When the task is complete — perhaps after a week or a month — it is obsolete.

A Passion for Software and Finance

While I was working, I spent some time in grad school.  I took all the EE classes that related to VLSI and microprocessor design.  The most interesting class was an open-ended research project.  The project I chose, while related directly to microprocessor design, had a 50/50 mix of software design and circuit/device-physics research.  I took over the software design work, and my partner took on most of the other work.  The resulting paper was shortened and revised (with the help of our professor and a third grad student) and accepted for presentation at the 2005 Society for Industrial and Applied Mathematics (SIAM) Conference in Stockholm, Sweden.  Unfortunately, none of us were able to attend due to conflicting professional commitments.

Having exhausted all “interesting” EE/ECE courses, I started taking grad school courses in finance.  CSU did not yet have a full-fledged MSBA in Financial Risk Management program, but it did offer a Graduate Certificate in Finance, which I earned.  Some research papers of note include “Above Board Methods of Hedging Company Stock Option Grants” and “Building an ‘Optimal’ Bond Portfolio including TIPS.”

Software development has been an interest of mine since I took a LOGO summer class in 5th grade.  It has been a passion of mine since I taught myself “C” in high school.  During my undergrad in EE, I took enough CS electives to earn a Minor in Computer Science along with my BSEE.   Almost all of my elective CS courses centered around algorithms and AI.   Unlike EE, which at times I found very challenging, I found CS courses easy and fun.  That said, I earned straight A’s in college, grad and undergrad, with one exception: I got a B- in International Marketing.  Go figure.

My interest in finance started early as well.  I had a paper route at the age of 12, and a bank account.  I learned about compound interest and was hooked.  With help from my Dad, and still 12 years old, I soon had a money market account and a long-maturity zero-coupon bond.  My full-fledged passion for finance developed when I was issued my first big grant of company stock options.  I realized I knew quite a bit about stocks, bonds, CDs and money market funds, but I knew practically nothing about options.  Learning about options was the primary reason I started studying finance in grad school.  I was, however, soon to learn about CAPM and MPT, and portfolio construction and optimization.  Since then, trying to build the “perfect” portfolio has been a lingering fascination.

Gradually, I began to see flaws in MPT and the efficient-markets hypothesis (EMH).  Flaws that Markowitz acknowledged from the beginning!  [Amazing what you can learn from going beyond textbooks, and back to original sources.]  I read in some depth about the rise and demise of Long-Term Capital Management.  I read about high-frequency trading methods and algorithms.  I looked into how options can be integrated into long-term portfolio-building strategies.  And finally, I started researching the ever-evolving field of Post-Modern Portfolio Theory (PMPT).

When I finally realized how I could integrate my software development skills, my computer science (AI) background, my graduate EE/ECE work and my financial background into a revolutionary software product, I was thunderstruck. I can and did build the alpha version of this product, HAL0, and it works even better than I expected.  If I can turn this product into a robust business, I can work on what I like, even what I love.  And that passion will be a strength rather than a “flaw”.   Send me an angel!