How I know a project plan is total nutbars… and how it can be fixed
We have all seen the magical project plans that have no grounding in reality. Schedules are far too aggressive, scope is beyond what the team can handle, not enough resources available to properly run the team… all to meet some magical “hard deadline” that has been imposed seemingly without any reason.
The folks in charge of these plans are not evil – they may just have somebody enforcing a deadline on them and are trying every possible thing to draw a picture to meet that deadline so they don’t get fired. These people are members of our team and we cannot leave them struggling alone. As architects and technical subject matter experts, we need to help our team members make their complete nutbar of a project plan into something that makes sense in the real world we live in.
#1. Integration timelines are ridiculously short
If I look at a project plan and it includes a complex integration delivered in under 6 months, I know this project plan is completely bonkers. Let's be clear: by complex I mean pulling together one or more systems for real-time integration across both read and write operations, possibly involving data synchronization across multiple data stores, and likely with a layer of security applied on top.
Why this is nutbars:
There are a host of reasons why a specific integration project may not be feasible in a short timeline, but here are the ones I find hold true all the time:
- Vendor API capabilities are insufficient. In every integration project I have worked on in my career, the system you are integrating with NEVER has the integration capabilities needed to meet the requirements of the business. Either the API is too limited, or there is no API, or the API only works in one direction… any number of limitations crop up and now you have to get the vendor on the line and start customizing the system. So now you have two project plans, one of them dependent on the other. Your timeline is out the window… (one way to contain that kind of gap is sketched after this list).
- You do not know what you think you know. If you are actually dealing with a complex system integration, the amount you do not know about what could go wrong far exceeds the amount you do know. This means any estimates you’ve made on how long it will take have a HUGE range of uncertainty.
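When the vendor API inevitably falls short, one way to keep the churn contained is to hide the vendor behind an interface you own, so the gap lives in a single adapter instead of leaking across the whole codebase. Here is a minimal sketch of that idea in Python; the CustomerDirectory contract and the vendor client it wraps are hypothetical stand-ins, not any particular product's API.

```python
from abc import ABC, abstractmethod


class CustomerDirectory(ABC):
    """The integration contract our application actually needs."""

    @abstractmethod
    def get_customer(self, customer_id: str) -> dict: ...

    @abstractmethod
    def update_customer(self, customer_id: str, fields: dict) -> None: ...


class VendorCustomerDirectory(CustomerDirectory):
    """Adapter over a hypothetical vendor client that, today, only supports reads."""

    def __init__(self, vendor_client):
        self._client = vendor_client  # assumed vendor SDK object

    def get_customer(self, customer_id: str) -> dict:
        # Assumed vendor read call; rename to whatever the real SDK exposes.
        return self._client.fetch(customer_id)

    def update_customer(self, customer_id: str, fields: dict) -> None:
        # The vendor has no write API yet. Failing loudly here keeps the
        # limitation visible until their customization (or a workaround) lands.
        raise NotImplementedError("Vendor write API not available yet")
```

When the vendor finally ships the missing capability, only this adapter changes; the rest of the application keeps calling CustomerDirectory.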
What to do:
If you want to pull multiple systems together and will need to involve multiple vendors, it is better to plan for delivery in stages. Work with one vendor first to get them out of the way, then queue up the next vendor. Trying to manage both at the same time will just increase your risk and blow out your timeline. Ideally, start with the most complex system integration; that way, if things go really wrong, you can start pulling in the other vendors earlier while you’re waiting on the initial vendor.
Another trick is to do multiple launches as systems become fully integrated. So long as the end user has the feeling of an integrated experience, you can hide some of the back-end integration work behind manual effort. For example, you can have the application send an email to a back-end system administrator to enter data manually rather than integrating directly with the back-end. This meets your launch requirements while still allowing you to roll out the real integrations as needed so your sys admin doesn’t quit on you. 😉
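That stopgap can be surprisingly small. Here is a rough Python sketch, assuming an internal SMTP relay and an admin mailbox (both made-up values). The point is that callers just call submit_order and never know a human is doing the data entry, so the real integration can be swapped in later without touching the rest of the application.

```python
import smtplib
from email.message import EmailMessage

# Assumed internal relay and mailbox; substitute your real values.
SMTP_HOST = "smtp.internal.example.com"
ADMIN_MAILBOX = "backend-admin@example.com"


def submit_order(order: dict) -> None:
    """Stopgap 'integration': email the order details to a back-end admin
    who keys them into the legacy system by hand."""
    msg = EmailMessage()
    msg["Subject"] = f"Manual entry needed: order {order['id']}"
    msg["From"] = "webapp@example.com"
    msg["To"] = ADMIN_MAILBOX
    msg.set_content("\n".join(f"{key}: {value}" for key, value in order.items()))

    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)
```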
Regarding your estimates, this is what spikes are for. For the initial project plan, you won’t have the details, so assume the worst. If your velocity is better than expected, you’ll be able to deliver more. However, if the spikes prove that the worst is indeed true, you’ll have the timeline to deal with the vendors or customize the system yourself.
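If you want to put a number on that "assume the worst" range once the spikes come back, one common technique (not the only one) is a three-point, PERT-style estimate: capture optimistic, likely, and pessimistic durations and plan against the weighted result. A quick Python sketch with made-up numbers:

```python
def pert_estimate(optimistic: float, likely: float, pessimistic: float) -> tuple[float, float]:
    """Classic three-point (PERT) estimate: weighted mean and standard deviation,
    in whatever unit you plan in (weeks, story points, ...)."""
    mean = (optimistic + 4 * likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return mean, std_dev


# Hypothetical numbers for a single vendor integration, in weeks.
mean, spread = pert_estimate(optimistic=4, likely=8, pessimistic=20)
print(f"expected: {mean:.1f} weeks, give or take {spread:.1f}")
# expected: 9.3 weeks, give or take 2.7
```

If the spikes prove the pessimistic case is the real one, that is the number that belongs in the plan.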
#2. No time for a deployment phase
I’m sure you’ve seen this one: a project plan that has the development team working right up to the launch date. I’m all for continuous deployment and all, but you can’t write up a project plan that has more than 4 weeks of development effort in it and then slot in a few days to launch it.
Why this is nutbars:
- Deployment takes time. Identifying requirements for an external vendor, having the work built, getting it integrated into development environments, deploying all the dependent pieces to production… this all takes time. If you already have an existing system up that is being replaced, you have to account for the cutover as well.
- Is somebody testing this? Sure, you might have some testing built into your iterations, but most system integrations or transactional business flows have very specific stakeholders who are not part of the day-to-day project team. These folks likely won’t be involved in acceptance testing until later in the project.
- Continuous Deployment has overhead. If you want to cut out your deployment phase to accelerate your timeline, you won’t actually save that time. You’ll have to put time into your development iterations to build a deployment pipeline, unit tests, regression tests, and likely a way to disable features if they prove too unstable (a minimal sketch of that last piece follows below). It all takes time…
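On the feature-disabling point: even the crudest kill switch has to be written, tested, and maintained, which is exactly the overhead being described. A minimal Python sketch, assuming an environment-variable convention and hypothetical payment-flow functions invented purely for illustration:

```python
import os


def feature_enabled(name: str) -> bool:
    """Tiny kill switch: a feature is on unless an environment variable turns it
    off (e.g. FEATURE_NEW_PAYMENT_FLOW_DISABLED=1). Real projects usually grow
    this into a config service, which is more overhead again."""
    return os.getenv(f"FEATURE_{name.upper()}_DISABLED", "0") != "1"


# Hypothetical code paths standing in for a risky new feature and its stable fallback.
def new_payment_flow(cart: list) -> str:
    return f"charged {len(cart)} items via the new flow"


def legacy_payment_flow(cart: list) -> str:
    return f"charged {len(cart)} items via the legacy flow"


def checkout(cart: list) -> str:
    flow = new_payment_flow if feature_enabled("new_payment_flow") else legacy_payment_flow
    return flow(cart)
```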
What to do:
To properly plan for the inevitable deployment and infrastructure issues, there should be a chunk of time (one or two iterations) at the end of the project plan for getting all the production environments up and cut over. Additionally, there should be a few iterations at the beginning of your project plan to set up the foundational elements and pull together development environments. This should allow you to build a fairly complex application and still have the time to get it stable and ready for launch.
Assume a UAT component as part of your deployment phase, even if you have testing built into each iteration. At the very least, you will use it as a training and messaging exercise to get buy-in from stakeholders. Usually, this is when you bring in the experts and stakeholders who really know how things work on the front lines, and they will give you the final feedback on what has been built.
If you want to skip all this and go to a Continuous Deployment model, you can cut the deployment phase down dramatically and assume many small deployments in your project plan. When calculating the overall timeline, assume that your team velocity will be lower than normal because of the overhead of building and maintaining automated testing and deployments throughout the project.
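To make that concrete, here is a back-of-the-envelope planning sketch in Python. The 20% overhead figure is an assumption for illustration, not a rule; the point is simply to size the plan with the reduced velocity rather than the raw one.

```python
import math


def iterations_needed(backlog_points: float, raw_velocity: float, cd_overhead: float = 0.20) -> int:
    """Plan against an effective velocity that reserves a slice of each
    iteration for pipeline, test automation, and deployment upkeep."""
    effective_velocity = raw_velocity * (1 - cd_overhead)
    return math.ceil(backlog_points / effective_velocity)


# Hypothetical backlog of 240 points and a raw velocity of 30 points per iteration.
print(iterations_needed(240, 30))                    # 10 iterations with the overhead
print(iterations_needed(240, 30, cd_overhead=0.0))   # 8 iterations if you ignore it
```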
#3. Magical overlapping teams
I recently saw a joke on Twitter that stated “9 people can’t make a baby in a month”. This is, of course, a reference to the common method for resolving timeline issues by just increasing the number of people or teams working at the same time. The theory being that if there is 8 months of work for the team, we can get two teams to do it in 4 months.
Why this is nutbars:
The most common mistake here is forgetting about the overhead of managing multiple streams and ensuring that the packages from all the teams come together at the right times and work together seamlessly. Creating teams to handle specific elements of a solution is not uncommon, but it comes with overhead. For every person or team you add, you are also adding people to manage those individuals and bring their ideas and solutions together. Then somebody is needed to manage the people who are managing those people.
Try to hold a 30-person stand-up every morning. It doesn’t work.
If you add more people, you do not decrease overall delivery time by an equivalent factor.
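A bit of back-of-the-envelope arithmetic shows why. The number of pairwise communication paths on a team grows as n(n-1)/2, so coordination cost climbs much faster than headcount. A tiny Python illustration (a rough proxy for coordination cost, not a scheduling model):

```python
def communication_channels(people: int) -> int:
    """Pairwise communication paths on a team of n people: n * (n - 1) / 2."""
    return people * (people - 1) // 2


for size in (5, 10, 20, 30):
    print(f"{size} people -> {communication_channels(size)} channels")
# 5 people -> 10 channels
# 10 people -> 45 channels
# 20 people -> 190 channels
# 30 people -> 435 channels
```

Doubling the headcount roughly quadruples the number of conversations that have to happen, which is why delivery time does not halve.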
What to do:
The solution here is fairly simple. If the math in your project plan assumes multiple teams will deliver at exactly the same combined rate as one team working over a longer period, something is wrong. Start adding buffer for the inevitable revisions as the teams come together. The easiest way to do this is to build in iterations at regular intervals that are focused on bringing the multiple teams' work together into an integrated solution. This forces regular check-ins between the teams and ensures that none of them goes too far off the mark before receiving feedback from the others.
If you’re feeling adventurous, these might make good candidates for interval releases!
#4. Assuming nothing will go wrong
If a project plan doesn’t have an iteration or two for stabilizing the release and dealing with technical debt, then the project is doomed to fail. I have never seen a project of any significant complexity pull off the fabled Zero Bug state.
Why this is nutbars:
During the project, you will address priority issues on stories as they occur in the iteration, but lower priority issues will begin to build up in your backlog. In addition, security testing, load testing, accessibility testing, and all sorts of acceptance tests may find additional issues that will be added to your backlog. These are things that may not block an iteration or story, but will definitely stop the launch from happening.
What to do:
This is when you need to have a stabilization iteration (or two). Give the team some time to receive feedback and fix issues prior to launch. The amount of time and effort spent on this should be relative to the size of the project you are working on. If the team has only built a single iteration of effort, then you will not need a full iteration of stabilization effort.
Deploying more often will also help with this, as you will have less chance to build up a load of technical debt. By planning for multiple releases, the amount of time spent stabilizing each release will be less, thereby reducing the risk to your timeline.
Sanity Check
Make sure a project plan is reviewed by the team before it is presented to management, and make sure everyone gets a chance to point out what might be missing or too aggressive. Setting expectations too aggressively might please management for a while, but when you do not deliver on time, all the credibility you had will be lost.