
Commentary: 3 More Signs That an Agile Project Isn't Really Agile

To "divide and conquer" a development life cycle is not really what agile methodology is about. And green boxes aren't necessarily a sign of success.

In my last post, I presented the Department of Defense's Defense Industrial Base (DoD DIB) Guide: Detecting Agile BS, which identified the first three signs that an agile program is not really agile, and asked how well we are doing at exploiting agile in the state. This week, I will pose the same question while considering the final three flags:
● Stakeholders (development, test, ops, security, contracting, end users, etc.) act more or less autonomously (e.g., "It's not my job").
● End users of the software are missing in action throughout development.
● A DevSecOps culture is lacking: manual processes are tolerated when they can and should be automated.

One quick sidebar: I do not really like the fact that the DoD authors suggest that these flags identify BS. "BS" implies that folks are intentionally trying to delude you into thinking that they are agile when they are not. In reality, sometimes these are honest mistakes. Sometimes inexperienced staff do not realize the consequences of these mistakes. Sometimes it is just hard to get it right. So, as we walk through these next three points, let's focus on getting it right the next time, not on calling "BS" on a project.

The first flag reinforces the point I tried to emphasize in the last post. There I focused on collaboration between IT and the Product Owner, the end user. This time the authors flag the fact that cooperation needs to occur across the board. Old IT organizations that slice the development life cycle into a set of roles and believe in "divide and conquer" do not get "agile." Architects need to work daily with developers; they do not get to sit outside and impose constraints without concern for the impact on the product (often with some misplaced academic concern for the "enterprise"). Testers are not outside agents, either; testing is an integrated part of the engineering process. Contracting is responsible for ensuring that contractors provide high-velocity developers and that velocity is sustained; it is not a one-and-done responsibility. End users are ultimately accountable for the quality of the deliverables, and they ensure quality through incremental acceptance. Agile works when the team has cohesion. It fails when anyone starts pointing fingers.

The second flag points out that end users must be consistently and regularly engaged with development.

One of the basics of agile is the idea that development is sliced into two-week time-boxed sprints and that the end users accept, or reject, the results of every sprint. This incremental acceptance is what ensures that the final project is acceptable. It is just not possible to conscientiously accept every intermediate result every two weeks and then reject the end result.

But there is another way to look at this. We are all familiar with the concept, if not the practice, of "fail fast." We tend to think that if we used to run two-year development projects and now fail in six to nine months, we are failing fast. We are not. In agile, you pass or fail every two weeks, and if you fail, you work to get it right in the next two weeks.

The final flag invokes a jumble of ideas. Let me focus on the most important idea.

Using modern software engineering practices, developers build user interfaces (UI) over small pieces of code, once called applets, and tie the code to the UI. All too often, they then test the application manually, with people entering data into the UI and looking for errors. This testing process is expensive and error-prone. The UI should contain no logic; the logic should reside in the small code pieces. Teams should unit-test and acceptance-test those pieces with automated tests that run every time code is checked into the repository, and code should be checked in at least once a day. In this way, testing becomes automated. Further, the automated tests should be embedded in a continuous integration framework so that it is clear which lines of code are tested and which are not. As a rule, over 90 percent of all code should pass through automated unit tests, and 70 percent of all code should pass through automated acceptance tests.
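To make the "no logic in the UI" point concrete, here is a minimal Python sketch. The function names and figures are purely illustrative (they come from me, not the DoD guide): the idea is simply that a pure piece of logic, kept out of the UI layer, can be exercised by an automated test on every check-in instead of by someone typing into a screen.

```python
# Hypothetical example: business logic kept out of the UI layer so that
# automated tests, run by the CI pipeline on every check-in, can cover it.

def monthly_benefit(base: float, cola_rate: float) -> float:
    """Pure logic: apply a cost-of-living adjustment to a base amount."""
    if base < 0 or cola_rate < 0:
        raise ValueError("inputs must be non-negative")
    return round(base * (1 + cola_rate), 2)

def test_monthly_benefit():
    """Automated unit test, replacing manual 'type into the UI' testing."""
    assert monthly_benefit(1000.0, 0.032) == 1032.0
    try:
        monthly_benefit(-1.0, 0.03)
    except ValueError:
        pass  # negative input correctly rejected
    else:
        raise AssertionError("negative input should be rejected")
```

Because the logic has no UI dependency, the same test runs unattended on every commit, which is what makes the daily check-in discipline cheap rather than burdensome.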

Further, the results of this process, including how many lines of code are checked in each day and how many lines pass the automated testing regimen, should be transparently reported. This ensures that everyone knows the status of the development effort after each check-in. Developers cannot hide hard problems from their managers, and managers cannot hide problems from the state. Status is reported automatically, not by a manual process that has a bias toward green boxes.
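As an illustration of what "automatically reported status" means in practice, here is a hypothetical CI gate in Python (the thresholds mirror the 90/70 percent rule of thumb above; the function and its inputs are my invention, not part of any particular toolchain). Pass or fail is derived from measured coverage, so no human gets to color the box green.

```python
# Hypothetical CI gate: status is computed from measured test coverage,
# not filled in by hand on a status report.

UNIT_THRESHOLD = 90.0        # percent of lines covered by unit tests
ACCEPTANCE_THRESHOLD = 70.0  # percent of lines covered by acceptance tests

def coverage_status(unit_pct: float, acceptance_pct: float) -> str:
    """Return PASS, or FAIL with the reasons, based on coverage numbers."""
    failures = []
    if unit_pct < UNIT_THRESHOLD:
        failures.append(f"unit coverage {unit_pct:.1f}% < {UNIT_THRESHOLD}%")
    if acceptance_pct < ACCEPTANCE_THRESHOLD:
        failures.append(
            f"acceptance coverage {acceptance_pct:.1f}% < {ACCEPTANCE_THRESHOLD}%"
        )
    return "PASS" if not failures else "FAIL: " + "; ".join(failures)
```

A gate like this, run on every check-in, is what removes the green-box bias: the report is a function of the numbers, and everyone sees the same numbers.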

There is much more to DevSecOps than this, but merely ensuring that every new software program uses a modern development environment with automated testing and test profiling, combined with tracking of end-user acceptance, will limit severe software program failures (although it might not stop cost overruns; more on that in another post).

I know that several state agile development programs would not pass the tests implied by the DoD paper. I know of some that would not get half of the flags right. I may mention these now and again in subsequent posts.

The thought I would like to leave you with is that agile development is not about “development” only. It is not a method for programmers. It is not a method for IT. It is a collaborative process that requires the end users to engage with IT as peers. The end users share the risk and make programs successful together with IT. This is a significant change. Agile methods are naturally transparent if you set them up correctly. Agile projects are incremental in a manner that allows for failing small and fast and correcting as you go. Agile methods have been developed in reaction to lessons learned from the many failed waterfall programs we all know and love.

Finally, it is just crazy to criticize agile based on projects that implement it poorly. It is crazier still to nostalgically imagine that, after all the failures, we would be better off if we just executed waterfall better.

IT veteran Rob Klopp's background includes having served as CIO of the Social Security Administration during the Obama presidency. He writes opinion pieces periodically for Techwire and blogs at ciofog.blog. The views expressed here are his own.