Following is the second in a periodic series of columns that IT veteran Rob Klopp is writing for Techwire. The views are his own.  

In the previous post, I introduced the idea that today, technological change occurs at such a rapid pace that we have to build or buy systems designed from the bottom up to change. As we work through this idea in subsequent posts, I’ll show how this can be easily accomplished. In this post, I will tell a story about how legacy software inhibits change and how modern software, deployed in the cloud, supports it.

The story will be familiar to many readers: We build an application and deploy it on a server. Over time, the workload increases to the point where performance becomes unacceptable. Maybe we can purchase a bigger server; maybe we cannot. So we take our smartest technical staff and put them to work tuning the application.

Sometimes tuning is just good engineering that could or should have been done in the first place. However, often tuning involves counterproductive acts. We might take a subroutine that is written in a high-level language and rewrite it in a more efficient low-level language, solving the performance problem but making the application less maintainable. We might take a flexible, normalized database and de-normalize it, improving performance but making future extensions to the database problematic. We might take executable code or data and move them from generalized to specialized hardware, reducing response time but also making the system dependent on these specialized devices. In other words, we often tune applications by making them less capable of supporting new requirements. Over time, repeated tuning exercises create a fragile system that is difficult and expensive to change. I suspect that you know of examples of these highly tuned, impenetrable legacy apps.

Today, sound software engineering as practiced in Silicon Valley results in applications that use cloud computing to scale up and down with demand. Applications run efficiently across multiple servers, and when more capacity is required, the system automatically distributes the applications across more servers. This automatic scaling is called “elasticity.” Better still, it is easy to develop these scalable apps using modern tools and practices. This scaling provides a means to deliver performance without tuning.
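The scaling decision itself is simple arithmetic, which is part of why elasticity requires so little extra effort. The sketch below is a toy illustration of the idea, not any cloud provider's actual API; the function name, thresholds and server counts are all hypothetical.

```python
import math

def desired_servers(current_load, capacity_per_server,
                    min_servers=1, max_servers=100):
    """Return the number of servers needed to handle current_load,
    clamped to an allowed range. A real cloud platform runs logic
    like this continuously and adds or releases servers for you."""
    needed = math.ceil(current_load / capacity_per_server)
    return max(min_servers, min(needed, max_servers))

# As the workload grows, capacity grows with it ...
print(desired_servers(current_load=250, capacity_per_server=100))  # 3
# ... and as the workload shrinks, the extra servers are released.
print(desired_servers(current_load=50, capacity_per_server=100))   # 1
```

The point is that no human tunes anything here: the platform responds to demand, and the application simply has to be built so it can run across however many servers the platform provides.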

I am not suggesting that Silicon Valley software engineers do not tune. Of course they do. However, they tune applications running over thousands of servers with millions of terabytes of data. They would not expend the human resources to tune an application when scaling up to four cloud servers, at the cost of another $8 per hour for a few hours during peak load, would solve the problem.

The components of a scalable application are described in the Twelve-Factor App manifesto. I believe that every piece of new software developed should be designed with these factors in mind, and when an element is skipped, there should be an explicit, written justification. Requiring the Twelve Factors is a policy CIOs should consider putting in place in every department. More important, every software engineer should be aware of the Twelve-Factor App guidelines and strive to follow them. They represent modern software engineering best practices.
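To make one of the factors concrete: the manifesto's third factor says to store configuration in the environment rather than in the code, so the same build runs unchanged in development, test and production. A minimal sketch in Python, assuming a hypothetical `DATABASE_URL` variable and a stand-in default value:

```python
import os

def get_database_url():
    """Twelve-Factor principle III: read config from the environment.
    The variable name and fallback below are illustrative only."""
    return os.environ.get("DATABASE_URL", "postgres://localhost/dev")

# Deploying to a new environment means changing an environment
# variable, not editing and redeploying code.
```

Small disciplines like this one are what make an application cheap to move, scale and change later.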

There are several takeaways from this narrative.

First, we all need to understand how optimizing for performance impacts the ability to change an application. I will continue to argue that modern applications need to be optimized for change, not for performance.

Next, we need to understand how the ready access to compute in the cloud enables the development of high-performance applications optimized for change with little extra effort.

There is also a key point to be left for a future post. It is critical to understand that moving a legacy application to the cloud is not sufficient to take advantage of cloud computing. We need to buy or build cloud-native applications, using the Twelve Factors as a guide, to make optimal use of the cloud.

In the next posts, I will discuss how cloud computing and scalability impact the modernization of crucial state systems. I will suggest that easy-to-build cloud-native applications reduce or remove the technical risk we usually associate with building and deploying state government services. I will remind readers that no Silicon Valley startup gets $100 million to develop software systems, and that companies like Facebook went public with a billion users after spending a fraction of that amount on development.

In the past, the software problems of the state were so massive that developing and deploying solutions required heroic effort. Today, distributed computing in the cloud makes those problems tractable. We should not be afraid to start the modernization process.

Rob Klopp describes himself as "a full stack executive with knowledge from business applications down to the bare metal. Experienced in executive management, finance, sales, marketing, and product with a bias towards innovative product strategies. Built new organizations from scratch, transformed very large old organizations, and provided technological thought-leadership over a 35-year career." He blogs at