Commentary: A Prescription for Modernizing Medi-Cal

When my appointment as the CIO of the Social Security Administration ended, I was invited to help the Department of Health Care Services (DHCS) develop a digital services capability and to build an IT modernization plan for the Medicaid Management Information System (CA-MMIS) group. This post summarizes several of the suggestions I made about how to approach modernizing the Medi-Cal systems.

The current Medi-Cal IT infrastructure is over 30 years old. It follows the same storyline you read again and again around government IT modernization: The legacy system is inflexible and difficult to extend; it runs on costly, aging infrastructure; and the notion of modernization feels big, risky and expensive.

When I started at CA-MMIS, I asked several executives what they thought the modernization of Medi-Cal might cost, and estimates ranged from $700 million to $1 billion. Since in Silicon Valley there is no such thing as a $100 million software development startup, let alone a $700 million one, I do not buy these numbers.

At the 60,000-foot level, the MMIS problem can be broken into several components: claim ingest; eligibility check; claim determination; and reimbursement payment and accounting.

Claim ingest involves receipt of a claim request from a provider in a standard format defined by the Centers for Medicare and Medicaid Services (CMS). Necessary edits are performed to ensure the claim is complete. If the edits and business rules that validate claim requests are known, building these websites is easy using modern development tools. As noted in my previous columns regarding the cloud and the DMV, the volume of claims to be processed is not significant compared to the volumes managed by modern Web applications. There is no technical risk here.
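
To make this concrete, here is a minimal sketch of what the ingest edits could look like in Python. The field names and checks are invented for illustration; a real implementation would validate the actual CMS-standard claim transaction.

```python
# Hypothetical completeness edits for an incoming claim. The field names
# are illustrative stand-ins, not actual CMS transaction segments.
REQUIRED_FIELDS = ["provider_id", "beneficiary_id", "service_code",
                   "service_date", "billed_amount"]

def ingest_edits(claim: dict) -> list:
    """Return a list of edit failures; an empty list means the claim passes."""
    errors = ["missing field: " + f for f in REQUIRED_FIELDS if not claim.get(f)]
    if "billed_amount" in claim and claim["billed_amount"] <= 0:
        errors.append("billed_amount must be positive")
    return errors

claim = {"provider_id": "P-1001", "beneficiary_id": "B-42",
         "service_code": "99213", "service_date": "2018-06-01",
         "billed_amount": 125.00}
print(ingest_edits(claim))  # [] -> claim passes the completeness edits
```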

Eligibility checking is more complicated than the ingest edits. The claimant’s information and history must be assessed to determine whether they are eligible for the benefit associated with the claim. Still, if the business rules are well understood, there is no technical risk in encoding those rules in a program or invoking them through a rules engine, and no technical risk in deploying those rules at scale in the cloud.
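
As a sketch of what encoding these rules in a small rules engine might look like, consider the toy example below. The two rules are invented placeholders; the real eligibility rules are exactly what must be recovered from the legacy system.

```python
# Toy rules engine: each rule is a named predicate over the beneficiary
# record and the claim. Both rules here are invented for illustration.
RULES = [
    ("enrollment active on date of service",
     lambda b, c: b["enrollment_start"] <= c["service_date"] <= b["enrollment_end"]),
    ("service category covered by benefit plan",
     lambda b, c: c["service_category"] in b["covered_categories"]),
]

def check_eligibility(beneficiary: dict, claim: dict) -> list:
    """Return the names of failed rules; an empty list means eligible."""
    return [name for name, rule in RULES if not rule(beneficiary, claim)]

beneficiary = {"enrollment_start": "2018-01-01", "enrollment_end": "2018-12-31",
               "covered_categories": {"office_visit"}}
claim = {"service_date": "2018-06-01", "service_category": "office_visit"}
print(check_eligibility(beneficiary, claim))  # [] -> eligible
```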

Claim determination is the process of calculating how much of the claim is covered for this beneficiary. Determination requires another set of complex rules to be known and encoded.
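
A determination sketch follows the same pattern; the fee schedule and copay below are invented placeholders, not real Medi-Cal rates.

```python
# Determination sketch: compute the covered amount for an eligible claim.
FEE_SCHEDULE = {"99213": 75.00}  # max allowed per service code (hypothetical)
COPAY = 5.00                     # flat beneficiary copay (hypothetical)

def determine_payment(claim: dict) -> float:
    """Covered amount = min(billed, allowed) less the copay, floored at zero."""
    allowed = min(claim["billed_amount"],
                  FEE_SCHEDULE.get(claim["service_code"], 0.0))
    return max(allowed - COPAY, 0.0)

print(determine_payment({"service_code": "99213", "billed_amount": 125.00}))  # 70.0
```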

Finally, claims to be paid must be aggregated by provider, and checks (or, better still, electronic payments) must be cut to reimburse the providers. The claims must be further aggregated so that the state can request reimbursement from the federal government. This requires a relatively simple cost allocation process.
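
In code, the aggregation and allocation steps are equally plain. The 50 percent federal matching rate below is illustrative only; actual Medicaid matching percentages vary.

```python
from collections import defaultdict

FEDERAL_SHARE = 0.50  # hypothetical matching rate, for illustration only

def build_payments(approved_claims):
    """Aggregate approved claims by provider; return payments and the federal share."""
    by_provider = defaultdict(float)
    for c in approved_claims:
        by_provider[c["provider_id"]] += c["approved_amount"]
    federal_request = sum(by_provider.values()) * FEDERAL_SHARE
    return dict(by_provider), federal_request

payments, federal_request = build_payments(
    [{"provider_id": "P-1001", "approved_amount": 70.0},
     {"provider_id": "P-1001", "approved_amount": 30.0},
     {"provider_id": "P-2002", "approved_amount": 55.0}])
print(payments)         # {'P-1001': 100.0, 'P-2002': 55.0}
print(federal_request)  # 77.5
```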

Note that CMS has published standards for how these components should interact as an architectural template, so building components to those standards would naturally result in modules that would integrate.

In another post, I may walk you through a back-of-the-napkin estimate for developing these components. For now, let’s say that I believe it to be around $50 million: developed, tested, and deployed as a minimum viable product. Let’s estimate a 20 percent per year cost for ongoing support, or $10 million per year, and another 10 percent, or $5 million, for ongoing operations. Note that 20 percent per year is what the state would conventionally pay to support a commercial, off-the-shelf (COTS) product. The $5 million per year for operations may even be too high: the costs of running this in the cloud will be small, and the remaining $4 million would employ a large staff of operations personnel.
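
For readers who want the arithmetic spelled out, the estimate reduces to a few lines:

```python
# Back-of-the-napkin numbers from the paragraph above.
build = 50_000_000              # MVP: developed, tested, and deployed
support = 0.20 * build          # 20% per year, the conventional COTS support rate
ops = 0.10 * build              # 10% per year for operations
print(support, ops)             # 10000000.0 5000000.0
```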

Unfortunately, it is just not this easy, and here’s the rub: The business rules that drive each of these components are not well understood. The rules are lost in a puzzle of 40-year-old COBOL and Assembler language programs. They are embedded in an arcane utility built by the state 40 years ago with no commercial equivalent. They are lost in the first .Net program ever built by one of the state’s vendors, who exercised as many of the new .Net facilities as possible just to see how they worked. This COBOL-plus-Assembler-plus-utility mishmash then passed through 40 years of extension and maintenance. To date, the code has defeated every attempt to unravel it.

What happens next is classic. The state wants to avoid risk, so it asks for a fixed bid from the vendor community. The vendors want the business if it is profitable, so they take the $50 million and double it to reduce risk. Then they realize there is no way to know how long it may take to uncover all of the rules, so they double the estimate again, and a $50 million project becomes a $200 million project. Then they add 20 percent for support and 10 percent for operations and end up with a $260 million bid (more if they do not effectively use the cloud, and more still if they are responsible for running the legacy system while they develop the new one).
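
The same napkin arithmetic, applied the way a risk-averse vendor applies it:

```python
# How a fixed-price bid inflates when the business rules are unknown.
base = 50_000_000               # the estimate if the rules were known
bid = base * 2 * 2              # doubled twice to absorb the unknown risk
bid += bid * (0.20 + 0.10)      # plus 20% support and 10% operations
print(f"${bid:,.0f}")           # $260,000,000
```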

Here is a way out:

First, start a project to discover and catalog every business rule. This takes the risk out and pushes the cost back toward the $50 million mark. Since no approach can proceed without the rules, this step is unavoidable. In other words, the project to find the rules should not put the risk on the vendor; it should be bid as time and materials.

The state should not start this project until a vendor with experience in rules extraction (not with a silver-bullet rules extraction software product; this has been tried) provides a comprehensive approach that seems feasible. The state must then manage the vendor and have progress demonstrated in reasonable increments.

Once the ingest rules are captured, the state can solicit bids to build the ingest component, or it can wait for the entire rule set to be cataloged and validated by the MMIS business side, and then solicit a single vendor to provide the entire application.

Note that there have been significant advances in the technologies associated with machine learning in the last several years. Neural networks learn rules that are perfectly captured but not easily understood; other techniques train decision trees, in which the learned rules are explicit. I would hope that the state would see several very innovative approaches proposed.
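
As one plausible, and loudly hypothetical, shape for such an approach: a decision tree trained on historical claims and their adjudicated outcomes yields rules a human can audit. The sketch below uses scikit-learn; the features, data, and labels are invented.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Train an explicit-rule model on (invented) historical claim outcomes.
features = ["beneficiary_age", "billed_amount", "is_emergency"]
X = [[34, 120.0, 0], [67, 450.0, 1], [5, 80.0, 0], [45, 900.0, 0]]
y = ["approved", "approved", "approved", "denied"]

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=features))  # readable if/then rules
```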

Maybe the state could get the universities involved in a special program here? The problem of capturing rules from millions of historical claims and their resulting outcomes seems ripe for a wacky idea or two, and it is reasonable to expect results from this sort of endeavor.

The original estimates from the CA-MMIS execs are grounded in history; they drew from their experience when they suggested very high numbers. But as I continue to suggest in these posts on modernization, modern software development tools significantly reduce the time required to implement new code. Modern software architecture techniques reduce the cost of orchestrating and integrating components. The cloud reduces infrastructure costs by 10X or more, and it reduces development cost by providing easy scalability. Modern continuous integration (CI) tools dramatically improve code quality. And modern machine learning tools can learn the business rules from a set of historical transactions.

Finally, the state needs to see that pushing all of the risk onto the vendors does nothing but drive up costs. Vendors take on risk at a price and can still miss the targets. Vendors need to make reasonable, not exorbitant, margins; when they must account for the risk of bidding on a project where the business rules are not known, both the risk and the cost go way up.

The state needs to try new approaches that accurately measure progress rather than outcomes. The state should hold vendors accountable for progress and have faith that progress in the right direction ends in substantial outcomes. There are ways to accomplish this, and I will consider these in the next few posts. 

IT veteran Rob Klopp's background includes having served as CIO of the Social Security Administration during the Obama presidency. He writes opinion pieces periodically for Techwire and blogs at ciofog.blog. The views expressed here are his own.