
The Department of Defense has now made public a nice document titled “DIB Guide: Detecting Agile BS” to “… provide guidance to DoD program executives and acquisition professionals on how to detect software projects that are really using agile development versus those that are simply waterfall or spiral development in agile clothing. …”

First, think about agile projects in the state today. Ask yourself whether the real end users — the staff who use the code day in and day out — are engaged in the development every week. If not, you know where the problem lies. Ask whether the project has fooled itself into thinking that subject matter experts who work for IT, not for the real end users, are sufficient. Consider whether user feedback is detailed and continuous. Ask whether the project has a minimum viable product defined that can deliver value ASAP, within six months, rather than when all of the wish list is satisfied.

Second, use this DoD paper to think about agile in the state in general. Several months ago, I was told that the California Department of Technology wanted to de-emphasize agile. If true, this may be a result of mistaking agile BS for real agile methods.

The Defense Innovation Board's paper calls out six flags that identify a project that is not really agile, and I will riff on the first three of these points in this post.

Point 1: Nobody on the software development team is talking with or observing the users of the software in action; we mean the actual users of the actual code. (The Program Executive Office does not count as an actual user, nor does the commanding officer, unless she uses the code.)

IMO, this is the single most significant cause of agile project failure. Successful agile projects/products have an engaged product owner who is the real user of the finished product.

At the Social Security Administration, we had a tough time with our first substantial product development ($85M to complete; there is no such thing as a $100M software development project), as the real end users were state, not federal, employees. This agile criterion required the development of a steering committee, with the state executive owners attending six to 12 times a year. These were real meetings where the results of our recent agile sprints were demonstrated for review and comment. They were not PowerPoint sleeping pills with lots of green boxes that declared all was well. The states provided a dozen or so real subject matter experts (SMEs) who participated in weekly sprint reviews. A team of proxies was created to represent the end users, day in and day out, under the guidance of the real SMEs. Developing proxies was a risky proposition, as proxies can be the first step to violating the first point, but over time the real product owners learned to trust that the proxies represented them and did not kowtow to IT.

I know that the state has programs where the end users — the real product owners — are the counties or contracted organizations. I know that the real end users of DMV applications are operational staff. I am less sure that the state IT staff engages with these folks as the owners, and I suspect that sometimes they barely engage at all.

The point of this point is that software development is a collaboration among the product owners, who are not IT; the development team, who may be contractors; and the product managers, who are responsible to the entity that acquired the funding for the program and may be IT staff.

This point is so critical that I would not allow a project to start without certainty that product ownership was properly in place. Further, I would stop any project where the product manager or development team lost the support of the product owner.

Point 2: Continuous feedback from users to the development team (bug reports, users’ assessments) is not available. Talking once at the beginning of a program to verify requirements doesn’t count!

This idea is a continuation of the first point. Once product ownership is established, the owner provides ongoing input. This input should come weekly, at least. The developers and the users form a team that collaborates to build a system that is acceptable at every point along the way. The DoD authors point out that capturing requirements once and up front is a mistake. I believe that even more critical is capturing acceptance every two weeks with each sprint. In the end, there is no need for a substantial formal acceptance test. If the system is accepted every two weeks along the way, there is no way to end up with an unacceptable product.

Again, I imagine that there are state projects where the development staff rarely, or never, collaborated with county or contracted nonprofit staff. They may have held occasional status meetings, but providing status is not collaboration.

Point 3: Meeting requirements is treated as more important than getting something useful into the field as quickly as possible.

This simple one-sentence point is loaded with nuance and complication. One counterintuitive aspect is that "something useful" can be something less fully developed than the complete set of requirements.

The idea is to think hard about how to deconstruct the software problem into useful things that can be put into production ASAP. In the next post, I will discuss the concept of a minimum viable product and consider a different way to think about defining increments to get to productive use sooner.

Another aspect of Point 3 is that meeting big "enterprise" requirements should not take precedence over meeting smaller, real business requirements. The enterprise's job is to support business functions as effectively as possible, ASAP. You could say that the enterprise is the sum of all of the business functions to be supported. Point 3 suggests that we must solve the functional business problems as quickly as possible and not let every requirement be a top priority.

Here is a simple test of this third point: If the product plan does not deploy something into production, into real-life production, within six to 12 months of the start of development, then something may be wrong. However, please do not lose the idea that deployment into production within two months is better.