We’ve had many clients approach us recently for help shoring up their application mapping (an application map defines the relationships between a business service and the infrastructure components necessary to deliver it). Honestly, these maps should be treated like a captain’s navigation charts at sea: if you encounter a storm, they should be at the ready, up to date, and extremely accurate.
There are a few reasons why this is a hot topic right now.
- First, as clients prepare to move applications to a third-party host, or want to take advantage of, say, a scalable database service (a hybrid app), they need to know the current app topology and dependencies. Put simply, you can’t move something if you don’t know all of its pieces and dependencies. In this scenario, we help clients understand the business outcome of moving an app or service to a cloud provider (please don’t move something because of cloud hype or because “it’s cheaper” – more on outcomes in a bit).
- Second, clients want to map their app topology to understand potential change impacts and the associated risk of service degradation (this helps avoid failures caused by deployed changes).
- Third, when an incident does lead to an impact or outage, teams can leverage app impact information to trace the root cause from the top down (reducing MTTR). This third scenario does require coupling application performance and availability monitoring into the solution.
- Fourth, client environments have grown significantly, leading to unmanageable levels of complexity as development takes place across varied resource models (bare metal, virtual, private, and public cloud resources). Clients are building maps to regain operational control and better understand their application dependencies.
- Lastly, the software tooling for mapping application topologies is getting much more efficient. However, don’t believe that everything can be mapped with the click of a button or that you’ll be able to “set it and forget it”. Application maps are only as good as their underlying data sourced from the CMDB (which in turn requires automation built into the solution architecture to maintain CMDB data integrity).
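The migration-scoping and impact scenarios above both come down to walking the same dependency graph, just in opposite directions: forward to find everything an app needs, and in reverse to find everything a failing component affects. A minimal sketch in Python, assuming a toy topology (the component names are illustrative, not from any real CMDB):

```python
from collections import defaultdict, deque

# Hypothetical app topology: each component maps to the components it
# depends on. Names are illustrative only.
DEPENDS_ON = {
    "checkout-app":  ["orders-api", "payments-api"],
    "orders-api":    ["orders-db", "message-queue"],
    "payments-api":  ["payments-db", "message-queue"],
    "orders-db":     [],
    "payments-db":   [],
    "message-queue": [],
}

def transitive_dependencies(component, graph):
    """Everything reachable from `component` -- the full set of pieces
    that would have to move (or be reachable) with it."""
    seen, queue = set(), deque([component])
    while queue:
        node = queue.popleft()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

def impacted_services(component, graph):
    """Walk the graph in reverse: which components break, directly or
    indirectly, if `component` fails?"""
    reverse = defaultdict(list)
    for node, deps in graph.items():
        for dep in deps:
            reverse[dep].append(node)
    return transitive_dependencies(component, reverse)
```

With this toy data, `transitive_dependencies("checkout-app", DEPENDS_ON)` scopes a migration (all five downstream components), while `impacted_services("message-queue", DEPENDS_ON)` answers the top-down incident question (both APIs and the checkout app). The real work, of course, is keeping the underlying map accurate.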
The first four reasons should all translate into a better end-user experience: increased uptime, reduced latency, mitigated data loss, and new opportunities for enhanced app or business-service functionality. Customers expect a convenient experience when they interact with technology – downtime is unacceptable and performance degradation is highly annoying. Take control, mitigate these risks, and don’t fear the storms.