Over the past few days, I have seen at least two posts or articles on a subject I’ve been meaning to write about for quite some time now. The first one I saw was by Rob Howard, and although it deals mostly with software development, it struck a chord with me. Today, I saw a post on Kevin Stone’s blog which referenced a ComputerWorld article discussing a similar issue with network deployments.
One of the key ingredients of failed (or marginally successful) technology deployments is scope creep. For a variety of reasons, people try to cram as much as possible into each deployment or product version, and often end up consuming vast amounts of time and money with very little to show for it.
Having been personally involved in a fair number of technology deployments over the years, I have often considered some of the reasons for these failed deployments. In this article, I’m going to focus on the deployment of technology infrastructure or products, rather than on software development.
There are basically two ways you can look at a new deployment, especially when it involves a new product category. This is true whether we’re talking about server hardware, networking hardware or information security solutions (which commonly touch the other two areas). For the rest of this article, we will simply refer to these solutions as “the product” or “the solution.”
Approach #1: Get all your internal stakeholders together and identify all of their needs. Do some research on the product and the vendors, attempting to choose the vendor with the best long-term strategy that addresses your needs as you understand them to be at this time.
Approach #2: Get some key stakeholders together and identify two or three top concerns. Search for the product which most closely addresses these concerns, and has the absolute lowest cost. Pay little to no attention to the ability of this particular vendor to grow with your needs.
Common Obstacles with Approach #1
I usually refer to this approach as the deployment-to-end-all-deployments mindset, and here are some of the common obstacles inherent in this methodology:
- Insufficient information about the problem to be solved
- Deployment time and costs are large when the scope is large
- Potential integration issues are grossly underestimated
- Potential obsolescence issues are ignored
Insufficient Information: You usually don’t have enough information about the problem you are trying to resolve, or about the vendor, or sometimes about the product category. Even your key stakeholders don’t always understand what would constitute a viable solution to some of the problems they have, because there are no metrics or statistics that currently define the extent of the problem. Limited visibility = limited understanding.
Costs: The only assurance you can have concerning costs is that they will be higher than expected, to a degree roughly proportional to the scope of the project. The greater the scope, the greater the potential for cost overruns. But that’s not all… When you jump on a new product/solution bandwagon too early and quickly align yourself with a vendor in order to take advantage of a 5-year vision, economies of scale, and one-stop shopping, it is almost guaranteed that things will take much longer to plan and execute, and the stated benefits will have to be much greater in order to get everyone to buy into the vision.
Integration Issues: Every project has them, and like costs, they are proportional to project scope, not only in number but in impact. Sometimes, as happens often in the security space, the solution is so new that dynamics like scalability and integration with other products have not really been fleshed out yet. Getting your arms wrapped around smaller chunks of issues is always advisable.
Obsolescence Issues: While you’re busy tying up half your IT organization to implement your grand 5-year vision, the marketplace is changing, the product is changing, the vendor is changing, and your needs may be changing. It is not uncommon for significant changes to be introduced between version 1 and version 2 of a product, changes that can reduce the value of your investment or even make it obsolete outright. (Don’t assume that upgrading to version 2 will automatically be any easier than getting a new product altogether.)
Okay, so what can we do to mitigate these risks to successful and timely product deployment?
Approach #2: Start with simple functionality and low costs…
Instead of trying to get it all right in the first pass, based on limited info, my general recommendation is to go for a potential throw-away strategy. Look for the product that stands the best chance of solving one or two of your most critical issues in that space, while incurring as low a cost as humanly possible.
Note the use of the term “potential” in the above paragraph. The goal is not to pick something that could never be used long-term; it is to focus on something inexpensive that solves today’s problems as quickly as possible. Sometimes that means you will need another product within 12-18 months, but that is only a real problem if you spend too much money and too much time deploying the initial product.
Here are some of the key advantages to this approach:
- Quick return on investment (ROI)
- Better visibility into the extent of the problems to be solved
- Better opportunity to assess real or expanded needs
- More leverage with vendors and pricing
ROI: The sooner you can get a solution in place, and the less you pay to deploy it, the sooner you will reap its benefits, and the faster you can build credibility with your business units (which will be needed for round two of the deployment cycle).
Better Visibility: This is key, because a lack of information increases project complexity and reduces the odds of project success. Conversely, better visibility into the issues your environment faces leads to better decisions about what functionality needs to be obtained during the next deployment round. This is directly tied to being able to better assess what you will need when you decide to pay out the big bucks, and to ensuring that you’re solving the right problem.
Pricing Leverage: It almost never fails — you go out and purchase a complex, expensive solution today, and in six months, the pricing has been adjusted due to market pressures such that it now costs 40-60% less to get into the game. This is almost entirely mitigated by using approach #2, because you are reaping the clear benefits of a low-cost solution, while having the time to properly evaluate which items on the vendor checklist really mean anything for you and your organization. You’d be amazed at how hollow marketing can sound when you have even a basic tool in place, vs. how alluring it can sound when you have nothing in place and are dying for any solution.
Putting It Into Practice…
So, this all sounds great in theory, but how does it get implemented in practice?
Let’s use an operational example, focusing on a well-established product set: Infrastructure Monitoring. In our scenario, you find yourself needing to deploy a tool or family of tools to monitor your technology infrastructure availability in a new organization, or one that has grown to a mission-critical level. Because there are many potential needs, such as Uptime, End-User Performance Experience, and Server Resource Utilization across many platforms, this could quickly become a huge project.
Using approach #1, you would probably start some dialog with your larger vendors to purchase and deploy a product like IBM Tivoli, CA UniCenter, HP OpenView or BMC Patrol. This would take a great deal of time, and require lots of buy-in across the organization because of impact and cost. And we’re talking about a well-established set of products. Imagine the complexity when dealing with a nascent market!
If you don’t have anything in place yet, approach #2 would look more like this: Find a low-cost solution (or even two solutions) that will get you basic uptime visibility on as many platforms as possible, as quickly as possible. For example, there are plenty of open-source options that will cover Linux, Solaris, Windows and OS X, and could be deployed in days or even weeks rather than months.
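To make “basic uptime visibility” concrete, here is a minimal sketch of the kind of check such a tool performs under the hood. It is illustrative only: the hosts.txt input file, the uptime_log.csv output file, and the choice of port are my own assumptions for the example, not features of any particular product.

```python
#!/usr/bin/env python3
"""Minimal cross-platform uptime check (illustrative sketch, not a product).

Assumptions for this example: hosts to watch are listed one per line in
hosts.txt, and each check result is appended to uptime_log.csv.
"""
import csv
import socket
from datetime import datetime, timezone

CHECK_PORT = 22          # any port you expect to be listening (22, 80, 443, ...)
TIMEOUT_SECONDS = 5


def host_is_up(host: str, port: int = CHECK_PORT) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=TIMEOUT_SECONDS):
            return True
    except OSError:
        return False


def main() -> None:
    timestamp = datetime.now(timezone.utc).isoformat()
    with open("hosts.txt") as f:
        hosts = [line.strip() for line in f if line.strip()]
    # Append one row per host per run: timestamp, host, up/down
    with open("uptime_log.csv", "a", newline="") as f:
        writer = csv.writer(f)
        for host in hosts:
            writer.writerow([timestamp, host, "up" if host_is_up(host) else "down"])


if __name__ == "__main__":
    main()
```

Scheduled every few minutes via cron or Task Scheduler, even something this crude starts producing real availability data almost immediately.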
Now, with the initial product(s) in place, you would start to get information about your technology environment which might indicate that your next focus should be application monitoring, or end-to-end user performance. Rather than guessing, you would have reliable data on which to base subsequent decisions. You and your team would also get some experience with using that category of product, which would provide key insights into what you would need or want to pay for, and where a commercial vendor would have to add value. And your deployment experience on this small scale would go a long way toward minimizing integration headaches on the subsequent, larger project.
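Continuing the sketch above (and still assuming the hypothetical uptime_log.csv it writes), here is roughly how that collected data could be summarized so the next purchase decision is driven by numbers rather than guesses:

```python
#!/usr/bin/env python3
"""Summarize the hypothetical uptime_log.csv: availability per host."""
import csv
from collections import defaultdict

checks = defaultdict(lambda: {"up": 0, "total": 0})

with open("uptime_log.csv") as f:
    for timestamp, host, status in csv.reader(f):
        checks[host]["total"] += 1
        if status == "up":
            checks[host]["up"] += 1

for host, counts in sorted(checks.items()):
    availability = 100.0 * counts["up"] / counts["total"]
    print(f"{host}: {availability:.1f}% available over {counts['total']} checks")
```

Even a per-host availability report this simple quickly shows whether your real pain is server uptime, a particular platform, or something the basic tool doesn’t measure at all, which is exactly the kind of information you want in hand before committing to a larger suite.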
Meanwhile, getting a quick win means that you should be able to count on more support from the business users during the next round of deployments, and that your attempts to minimize scope creep will be met with less resistance than might otherwise occur.
Finally, you will occasionally find that by going with a smaller player up front, you will have more leverage and influence over the future direction of the solution, so that what started out as just a throw-away might end up incorporating what you need for long-term growth.
Over the years, I have found this approach to be very successful, especially when your initial deployment will involve a new point solution rather than a full-blown framework or suite of products. Get it deployed quickly, start reaping benefits as soon as possible, get better data to make better subsequent decisions, and reassess the overall landscape before making the big commitment for the extended vision.
It will increase the odds of success for all future efforts, and you will very likely spend far less on the “throw-away” solution than you would on the first year of a complex, multi-year deployment.