This is going to upset a lot of consultants, but I think Agile/DevOps maturity models are ridiculous. Most of them are set up to test whether you look like a good Agile team, rather than checking what really matters.
I typically see questions like: “Is your team 7 +/- 2?”, “Do you run daily standups?”, “What’s your unit test coverage?”, “What percentage of your tests are automated?”, “To what extent do you depend on other teams?”. These aren’t bad questions, and they may well surface opportunities for improvement based on best practices, plus some indication of how agile you are. However, shouldn’t agility be measured in agility? Put simply: “How quickly can you confidently get functionality to actual users in production?” That’s it, that’s the only question. So here are the 5 levels of maturity I see and their answers to that question:
Level 0 – “Never, because I have no confidence that what I put into production will work.”
A lot of people mistakenly believe that removing testing and going straight to production is “Agile”… it’s not. If you do one good drop to production per year, you’re more agile than someone customers can’t depend on. This does NOT include prototypes or PoCs that you run with customers; those are a fantastic way to gain a true view of requirements, but customers need to know what they can count on.
Level 1 – “At the end of the next project, which might be a month or a year away.”
Most teams in most companies of any size fit into this category. They often run their dev teams in an agile fashion, doing sprints/demos/standups, but wait until a large number of sprints are complete to test with other applications and finally deploy to production. These releases often come at the end of projects and involve lengthy code freezes.
Level 2 – “Our next release train reaches production on X date, we can have it in that.”
Now you’re getting to a point where an individual team, without relying on feedback from other teams, can promise a user a specific piece of functionality at a specific time. The key here is that the trains run to production regardless of the number of features that go in a particular release. If I’m working on a new component and it’s not done yet, my release might be exceptionally small. However, since I don’t have to wait for the project to complete to release, I can still ship some functionality out to customers regularly. There still may be a lot of testing at the end of the release train, but the key is that it’s regular.
Level 3 – “If it’s high enough priority, we can take the next sprint to prod. Please leave X days for the sign-offs, production provisioning, and deployment.”
This can be achieved while the organization as a whole is still releasing at Level 2 and, consequently, you don’t actually go to production every sprint. The key, however, is that you have full confidence your application is ready for production when the demo is over. Being able to go to production at the end of the sprint implies two big improvements over Level 2. First, you have the ability, within the bounds of a sprint, to sufficiently test your code against the other applications you interface with. Second, you have the ability to either complete all functionality within a sprint or to hide functionality that is not complete.
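That second ability — hiding unfinished functionality — is usually done with feature flags. Here’s a minimal sketch of the idea; the flag store, flag names, and `render_navigation` function are all illustrative, not any particular framework’s API:

```python
# Minimal feature-flag sketch: unfinished work ships "dark" (flag off),
# finished work ships live (flag on). Names here are illustrative.

FLAGS = {
    "new_checkout": False,  # still in progress -> hidden in production
    "fast_search": True,    # complete -> visible to users
}

def is_enabled(flag: str) -> bool:
    """Default to off, so unknown or unfinished features stay hidden."""
    return FLAGS.get(flag, False)

def render_navigation() -> list:
    """Build the user-facing menu based on which features are live."""
    items = ["home"]
    items.append("search" if is_enabled("fast_search") else "basic_search")
    items.append("checkout_v2" if is_enabled("new_checkout") else "checkout")
    return items

print(render_navigation())  # -> ['home', 'search', 'checkout']
```

The point is that the half-built checkout code can be merged and deployed every sprint; it simply never renders until the flag flips.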
Level 4 – “I can fit it in this sprint, and every sprint goes to production right after demo.”
To be clear here, I mean RIGHT after the sprint. I mean that once you tag a sprint, it just drops into production. The big change here is less with Dev and more with Ops. To make this possible, the hard-to-reach places of Ops have to be automated (and ideally provided by infrastructure as code). This can’t be, “I go to production right after every sprint that doesn’t require changes to the database or a new component in production.”
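The core property of infrastructure as code is that the desired state is declared as data and an idempotent “apply” converges reality toward it — running it twice does nothing new, which is what makes fully automated deploys safe. A toy sketch of that idea (the resource names are made up, and a real tool like Terraform works against actual providers, not a dict):

```python
# Tiny infrastructure-as-code sketch: desired state is declared as data,
# and an idempotent apply() converges the current state toward it.
# Resource names are illustrative, not any real provider's API.

desired = {
    "db-schema": "v42",
    "queue:orders": "created",
    "lb-cert": "2025-rotation",
}

def apply(current: dict, desired: dict) -> dict:
    """Converge current -> desired; report only what actually changed."""
    actions = []
    for resource, state in desired.items():
        if current.get(resource) != state:
            actions.append(f"update {resource} -> {state}")
            current[resource] = state
    return {"state": current, "actions": actions}

run1 = apply({}, desired)                # first run: everything changes
run2 = apply(run1["state"], desired)     # second run: nothing to do
print(len(run1["actions"]), len(run2["actions"]))  # -> 3 0
```

Database schemas, queues, and certificates are exactly the “hard-to-reach places” that break the “every sprint goes to production” promise when they’re still manual.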
Level 5 – “This afternoon. Every user story goes to production, and since this one is top priority, it is next.”
This also requires another level of maturity; all of the development for the application must now be committed to master at least once per story. Additionally, you need to have a way to deploy without downtime and a quick backout strategy (e.g. blue/green deploys).
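The blue/green pattern mentioned above boils down to two identical environments and an atomic pointer flip: deploy to the idle side, switch traffic, and backout is just flipping the pointer back. A minimal sketch, assuming the class and version strings here are illustrative rather than any real deployment tool:

```python
# Blue/green sketch: two environments behind a traffic pointer.
# Deploys go to the idle side; the flip is atomic, so there is no
# downtime, and backout is a second flip. All names are illustrative.

class BlueGreen:
    def __init__(self):
        self.versions = {"blue": "v1.0", "green": "v1.0"}
        self.live = "blue"  # the environment your load balancer points at

    @property
    def idle(self) -> str:
        return "green" if self.live == "blue" else "blue"

    def deploy(self, version: str) -> None:
        """Install the new version on the idle side, then flip traffic."""
        self.versions[self.idle] = version
        self.live = self.idle  # atomic switch -> zero downtime

    def rollback(self) -> None:
        """Backout = flip the pointer back to the previous environment."""
        self.live = self.idle

env = BlueGreen()
env.deploy("v1.1")
print(env.live, env.versions[env.live])  # -> green v1.1
env.rollback()
print(env.live, env.versions[env.live])  # -> blue v1.0
```

Because the old environment is left running and untouched, the backout takes seconds — which is what makes “every user story goes to production this afternoon” a survivable policy.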
The bottom line here is that measuring maturity this way means that we only measure outcomes and NOT the ways we get to them. Automated testing, automated deployments, Infrastructure as Code, test coverage, etc… are all great. You likely can’t reach levels 3, 4, or 5 without all of them. But you MIGHT be able to, and if that’s what works for your team… go for it.
Let me close with an example… If you run a dev team, and after every user story check-in 100 interns log in and manually regression test for 15 minutes, and then one of them carries a floppy disk into the datacenter and loads the changes into production, then you have achieved Level 5 (provided that you’ve found a way to make interns reliable and a floppy disk is ALWAYS sufficient for deploys). Most of the maturity models in the industry would score you as a zero because your team is too big and you don’t have automated tests or deploys, but that’s just not true… you’ve delivered production value in 20 minutes! You’re far more agile than many teams I have seen with near-100% automated test coverage, but a 3-month wait before they get “approved” for production.
I recommend teams look at this maturity model and only talk about tools/techniques that either help them get from one maturity level to the next OR help them keep from falling back to zero.