
Tool Time
Tool Selection, Automation and Integration
By Mukesh Barot, Navy IT Service Management Office (NAVITSMO) and Phil Withers, NAVITSMO Contractor Support Staff - July-September 2015
If you’ve worked with information technology for any length of time, you are well aware of the complexity of THE INFRASTRUCTURE — that shadowy leviathan containing all of the enterprise services, processes, systems, components, and configuration items (CIs), to name just a few, that make up the IT landscape.

THE INFRASTRUCTURE provides the necessary collective processing power we need to do… whatever it is we do. Viewing the Defense Department infrastructure from 10,000 feet up, we can arguably scope our collective activities within the DoD as putting the digital spear into the hands of the warfighter; in other words, what we as DoD/DON IT professionals "do" is to enable and elevate information dominance as a warfighting discipline no less potent than air warfare, surface warfare, or submarine warfare.

Indeed, Information Dominance is now the singular indispensable strategic capability upon which all other warfare domains and their concomitant warfighting doctrine are based. The DoD infrastructure resides within an architected enterprise that increasingly employs analysis, design, planning and implementation tactics using a holistic approach to successfully develop and execute information dominance strategy while simultaneously denying this capability to our adversaries via the Joint Information Environment (JIE).

Tightening the aperture of our focus a bit, we know that our ability to manage any one segment of this complicated and interrelated enterprise requires a reliance on tools that are (hopefully) specific to the task.

All of us rely on tools to one extent or another, but few of us have ever considered just how it is that the tool we’re using was selected from literally hundreds of other tools that may also be suited for the same purpose. Was there a defined process by which the tool was selected, or (gulp) was it “sold” to us as the panacea for all our enterprise management requirements only to discover later that it also came with an undocumented feature: buyer’s remorse?

To help alleviate the potential for wasted effort and dollars, we look to our old IT Service Management (ITSM) friends: international standards and industry best practice. As with all service design activities, tools — to have true utility for the specific task for which they are commissioned — must be considered early in the service design phase of the IT service lifecycle.

We must always be vigilant to preserve the integrity and inviolability of the live environment. Service operations are not the place to ‘try out’ new things, nor are they the proper venue for research and development. Actually, this consideration occurs even earlier in the service strategy phase when we stop to realize that the tools we use, regardless of what they do for us, exist to support business or mission strategy and/or enforce business rules that are themselves formulated from strategic policy.

If the tool does anything other than support corporate strategy as its primary objective, it should logically be classified as a toy and is, for all intents and purposes, a waste of resources. Period.

Simply stated, the tool must show positive linkage between its instantiation (where and in what configuration) in the enterprise and at least one strategic goal or objective — the more the better.

For the sake of illustration, we’ll use ITIL as our framework, although there are many other models for tool development and selection from which to choose; however, many information dominance professionals have been exposed to the ITIL lifecycle through Foundations training and are at least familiar with the lifecycle approach.

Assuming you are going to design a tool in concert with a new or changed service, getting ahead of the tool implementation curve means starting at the beginning — with the Service Design Package (SDP). The SDP is the authoritative document produced as an output of the Service Design phase of the ITIL lifecycle. It contains all Service Level Requirements (SLRs) expressed and validated by the customer, and the tool must clear each of the following hurdles (including the funding for each step) to ensure it meets enterprise architecture specifications:

  • Functional Requirements — tool requirements that may need to be specified in greater detail. Note that this functional analysis may be reviewed using the DOTMLPF (doctrine, organization, training, materiel, leadership and education, personnel, and facilities) framework.
  • Information security requirements.
  • Compliance requirements — Statutes, regulations, directives, instructions, etc.
  • Architectural constraints — Scoping.
  • Interface requirements — Does the tool need to communicate with other systems?
  • Migration requirements — Does data need to be migrated from an existing tool or database?
  • Operational requirements — Does the tool require backup and restore mechanisms and/or require compatibility with existing system monitoring tools?
  • Required access rights — Which users or user groups will require access to the tool and at what level?

Now back to reality: designing the tool is a luxury we may not (and probably will not) be afforded with extant services in operation (in the live environment) throughout the enterprise. Nevertheless, the preceding SDP review areas serve as a fairly comprehensive checklist against which to compare the capabilities of any third-party tool that is being considered as a “bolt-on” to an existing enterprise service.
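
For illustration only, here is one minimal way a review team might capture those SDP review areas as a checklist and flag the gaps a candidate bolt-on tool leaves open. The sketch is written in Python; the candidate findings are hypothetical placeholders, not an assessment of any real product.

    # Minimal sketch (hypothetical data): record how a candidate bolt-on tool
    # fares against the SDP review areas listed above and surface the gaps.
    SDP_REVIEW_AREAS = [
        "Functional requirements",
        "Information security requirements",
        "Compliance requirements",
        "Architectural constraints",
        "Interface requirements",
        "Migration requirements",
        "Operational requirements",
        "Required access rights",
    ]

    # True = candidate satisfies the area; False = gap to resolve before selection.
    candidate_findings = {area: True for area in SDP_REVIEW_AREAS}
    candidate_findings["Information security requirements"] = False  # hypothetical gap
    candidate_findings["Migration requirements"] = False             # hypothetical gap

    gaps = [area for area in SDP_REVIEW_AREAS if not candidate_findings[area]]
    print(f"Candidate satisfies {len(SDP_REVIEW_AREAS) - len(gaps)} of {len(SDP_REVIEW_AREAS)} review areas")
    for area in gaps:
        print(f"  Gap to resolve: {area}")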

However, it must be understood that bolt-on is always more expensive in the long run than baked-in. That’s why, for instance, the criticality of information security is resident within the ITIL Service D-E-S-I-G-N phase of the lifecycle. Leaving INFOSEC to be considered only during Service Operation introduces an order of magnitude increase in risk to the enterprise. The concomitant risk mitigation activities required in Service Operation are vastly more expensive to integrate than when they are considered and mitigated (identified, abated, avoided, shared, reduced, transferred or accepted) in the design phase.

More reality: many times our tool selection is based on what everyone else is using and not necessarily on what is best for our particular need or environment. One factor that influences this behavior is simple economics: there is already an abundance of licenses for a tool or toolset that has been over-bought by our organization or by some higher echelon in our chain of command. Tragically, this selection method shoehorns the tool into a tortured use case from which we get little value. In many cases we could do better by reverting to manual data collection and analysis, once analysis man-hours are weighed against tool licensing costs in a purely value-centric, cost-benefit tradeoff comparison over the life of the software license.
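
That tradeoff is easy to rough out on the back of an envelope. The Python sketch below compares total licensing cost with the cost of the manual analysis hours the tool would replace over the life of the license; every figure in it is hypothetical and exists only to show the arithmetic.

    # Hypothetical cost-benefit sketch: tool licensing vs. manual analysis
    # over the life of the software license. All figures are illustrative only.
    license_cost_per_seat = 1_200.00   # dollars per seat per year (hypothetical)
    seats = 25
    license_life_years = 3

    labor_rate = 85.00                 # dollars per analyst hour (hypothetical)
    manual_hours_per_month = 20        # manual collection/analysis hours the tool would replace

    tool_cost = license_cost_per_seat * seats * license_life_years
    manual_cost = labor_rate * manual_hours_per_month * 12 * license_life_years

    print(f"Tool licensing cost over {license_life_years} years:  ${tool_cost:,.2f}")
    print(f"Manual analysis cost over {license_life_years} years: ${manual_cost:,.2f}")
    print("Lower-cost option:", "the tool" if tool_cost < manual_cost else "manual analysis")

In this invented example the licenses would cost more than the labor they displace, exactly the situation described above.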

When shopping around for the tool that uniquely meets the requirements you have already defined (you have defined your requirements, right?), you can easily segment the market and begin your search within a few broad categories or core technologies:

Configuration Management — When IT Service Management aficionados throw around phrases like “automated infrastructure,” “infrastructure as code,” and “programmable infrastructure,” they’re talking about configuration management: the tracking and controlling of changes to the enterprise and its discrete components (all those configuration items, or CIs, you learned about in ITIL Foundations class). It also includes the archiving of all file versions into a central configuration management database (CMDB), which enables multiple developers to view and work on the same data while avoiding version-control issues.
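
To give a flavor of what “infrastructure as code” means in practice, the toy sketch below declares the desired state of a hypothetical configuration item and reports any drift from the state actually observed. It illustrates the concept only and does not represent any particular configuration management product or CMDB schema.

    # Toy "infrastructure as code" sketch: declare a CI's desired state and
    # detect drift against the observed state. All values are hypothetical.
    desired_state = {
        "ci_name": "web-server-01",
        "os_patch_level": "2015-06",
        "ntp_server": "time.example.mil",
        "tls_enabled": True,
    }

    observed_state = {
        "ci_name": "web-server-01",
        "os_patch_level": "2015-03",   # behind the desired baseline
        "ntp_server": "time.example.mil",
        "tls_enabled": True,
    }

    drift = {key: (desired_state[key], observed_state.get(key))
             for key in desired_state
             if observed_state.get(key) != desired_state[key]}

    for key, (want, have) in drift.items():
        print(f"{desired_state['ci_name']}: {key} should be {want!r} but is {have!r}")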

Application Deployment — Application deployment tools automate releases (release packages, release units, patches and the like) and are at the heart of integrated Service Transition for assured service delivery, one of the primary tenets of ITSM.
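
At its most basic, that automation is nothing more than a repeatable, scripted sequence: stage the release package, deploy it, and record what was deployed where. The deliberately bare-bones Python sketch below shows the idea; the package names, paths and log file are hypothetical.

    # Bare-bones release deployment sketch. Package names, paths and the
    # deployment log are hypothetical placeholders.
    import datetime
    import shutil
    from pathlib import Path

    def deploy_release(package: Path, target: Path, log: Path) -> None:
        """Unpack a release package into the target directory and log the deployment."""
        target.mkdir(parents=True, exist_ok=True)
        shutil.unpack_archive(str(package), str(target))   # e.g. a .zip release package
        with log.open("a") as fh:
            fh.write(f"{datetime.datetime.now().isoformat()} deployed {package.name} to {target}\n")

    # Hypothetical usage:
    # deploy_release(Path("releases/app-1.4.2.zip"), Path("/opt/app/current"), Path("deploy.log"))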

Monitoring — Any sufficiently large IT enterprise usually requires two distinct types of monitoring: application performance monitoring tools and INFOSEC continuous monitoring tools for Computer Network Defense (CND). At the infrastructure level, server monitoring tools provide visibility into capacity, memory, and CPU consumption so reliability engineers can fix issues as soon as they appear. The key is to make sure that the right people can see the data so they can make better decisions.
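
Stripped to its essentials, that kind of server monitoring is periodic sampling of capacity metrics plus alerting when a threshold is crossed. The sketch below uses the open-source psutil library (assumed to be installed) with invented thresholds; a real enterprise tool would feed a dashboard or the event management process rather than print to a console.

    # Minimal server-monitoring sketch using the psutil library (pip install psutil).
    # Thresholds are hypothetical.
    import psutil

    CPU_ALERT_PCT = 85.0
    MEM_ALERT_PCT = 90.0

    cpu = psutil.cpu_percent(interval=1)    # sample CPU utilization over one second
    mem = psutil.virtual_memory().percent   # current memory utilization

    if cpu > CPU_ALERT_PCT:
        print(f"ALERT: CPU at {cpu:.1f}% exceeds the {CPU_ALERT_PCT}% threshold")
    if mem > MEM_ALERT_PCT:
        print(f"ALERT: memory at {mem:.1f}% exceeds the {MEM_ALERT_PCT}% threshold")
    print(f"Sampled: CPU {cpu:.1f}%, memory {mem:.1f}%")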

Version Control — To achieve the benefits of automation, it’s essential to apply version control not just to application code, but also to the physical infrastructure components, their configurations, and databases. This requires scripting all source artifacts, but the payoff should be a single source of truth for IT systems and databases, allowing teams to quickly identify problems when things go wrong and to recreate known states with the push of a button.
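
Putting an infrastructure artifact under version control can be as simple as committing its scripted definition alongside the application code. The sketch below shells out to the standard git command line (assumed to be installed and initialized for the working directory); the file path and commit message are hypothetical.

    # Sketch: place an infrastructure configuration artifact under git version control.
    # Assumes the git CLI is available and the current directory is a git repository.
    import subprocess

    def commit_artifact(path: str, message: str) -> None:
        subprocess.run(["git", "add", path], check=True)
        subprocess.run(["git", "commit", "-m", message], check=True)

    # Hypothetical usage: version the scripted definition of a server CI.
    # commit_artifact("infrastructure/web-server-01.yml", "Baseline web-server-01 configuration")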

Test and Build — These tools automate common developer tasks including compiling source code into binary code, creating executables, creating documentation, and providing a configurable test engine to simulate the live environment (partial or aggregated) for validation against enterprise systems.
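
The same scripted, repeatable principle applies here: one command that builds, runs the test suite, and fails loudly if anything breaks. The Python sketch below chains hypothetical build and test commands with the standard subprocess module; substitute whatever build system and test runner your project actually uses.

    # Sketch of a minimal build-and-test step. The commands are hypothetical
    # placeholders for your project's real build system and test runner.
    import subprocess
    import sys

    STEPS = [
        ["make", "build"],                         # compile source into binaries (hypothetical target)
        ["python", "-m", "unittest", "discover"],  # run the automated test suite
    ]

    for step in STEPS:
        if subprocess.run(step).returncode != 0:
            sys.exit(f"Step failed: {' '.join(step)}")
    print("Build and test completed successfully")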

As you are no doubt aware, the right tool chain will automate IT services, provide real-time visibility into system and application performance, and give you a single source of truth. More important than an individual tool’s capabilities, though, is how closely those capabilities match the organization’s strategic goals, which, again, is the key qualifier to look for when hunting down the right tool. That’s the way to maximize your chances of achieving quality IT service delivery goodness.

Now for the shameless, but totally relevant, plug: One of the best resources to begin your tool requirements analysis, or to use as a yardstick against which to measure a potential tool candidate’s capability, is the Navy Process Reference Model (NPRM). The NAVITSMO developed the NPRM (now in version 3) by leveraging the 26 ITIL processes and adding processes and components from ISO/IEC-20000, COBIT and others to produce a DoD-specific process reference model containing 34 interlocking IT processes — everything from Access Management to Workforce Management. Each process defines its inputs, outputs, controls, roles, responsibilities and skill recommendations for the process roles; incorporates Continual Service Improvement (CSI) principles; and (insert drumroll) includes tool recommendations.

The NPRM tool recommendations are not recommendations for specific tools or tool brands. Rather, they are a collection of ‘should’ statements that typify a tool or toolset specific to the process it is designed to support. For instance, everybody’s favorite ITSM process, WORKFORCE MANAGEMENT, is depicted in the NPRM with a table of nine tool ‘should’ statements that inform the requirements analysis team about what a tool specific to supporting Workforce Management should be able to do, to wit:

NPRM Workforce Management Functional Tools Requirements Recommendations

The tool should:

  • Enable connection to the knowledge management system for résumés, interview information, and employee data.
  • Enable connection to the time management system.
  • Incorporate an automated call distribution mechanism.
  • Facilitate workforce forecasting and modeling.
  • Facilitate workforce estimating (time and material).
  • Incorporate an integrated skills database.
  • Have a workforce scheduler.
  • Be able to track individual certifications.
  • Incorporate a professional development and review system.

In keeping with the NPRM’s non-prescriptive and vendor-neutral nature, these recommendations are ready-made checklists any design team or requirements analysis team can use to judge the capability of a particular tool against the ideal. In this instance, Workforce Management, a critical functionality in any IT organization, is designed as a tool-agnostic process within a DoD enterprise that acknowledges the need for a tool or tools to automate the process. It is not an exhaustive list by any means and is intended as thought leadership for organizations that struggle with tool criteria and selection.

Other equally valid considerations for the evaluation team (a simple weighted-scoring sketch follows the list):

  • Functionality — suitability, accuracy, interoperability, compliance and security.
  • Reliability — maturity, fault tolerance, recoverability.
  • Usability — understandable, learnable, operable.
  • Efficiency — time and resources.
  • Maintainability — analyzable, changeable, stable, testable.
  • Portability — adaptable, installable, conformant, replaceable.
  • General Vendor Qualifications — maturity of the vendor, market share, financial stability.
  • Vendor Support — warranty, maintenance and upgrade policy, regularity of upgrades, defect list with each release, compatibility of upgrades with previous releases, email support, phone support, user groups, availability of training, recommended training time, price.
  • Licensing and Pricing — open source or commercial, licensing model used, rigidity (floating or node-locked license), price consistent with the estimated price range, price consistent with comparable vendor products.
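
One common way to apply criteria like these is a weighted scoring matrix: weight each criterion by how much it matters to your strategy, score each candidate against it, and compare totals. The sketch below shows that technique with invented weights, scores and tool names; it is not an endorsement of any scoring scheme or product.

    # Hypothetical weighted-scoring sketch for comparing candidate tools.
    # Weights reflect strategic importance (summing to 1.0); scores run 1-5.
    weights = {"Functionality": 0.5, "Reliability": 0.3, "Maintainability": 0.2}

    candidates = {
        "Tool A": {"Functionality": 4, "Reliability": 3, "Maintainability": 5},
        "Tool B": {"Functionality": 5, "Reliability": 4, "Maintainability": 2},
    }

    for name, scores in candidates.items():
        total = sum(weights[criterion] * scores[criterion] for criterion in weights)
        print(f"{name}: weighted score {total:.2f} out of 5")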

This is a good place in the discussion to interject a few thoughts about tool integration. While a holistic viewpoint and mindset for the IT enterprise are typically evangelized in ITSM dogma writ large, the exact opposite mindset prevails when tools are considered. Managers want a tool to do a thing, a job: run a report, provide specific data from specific repositories, automate a specific function, and so on, ad nauseam. Very little consideration is ever given to implementing a tools strategy that incorporates the larger environment and its needs. And who can blame them when there is little (if any) enterprise policy on the subject?

Many times we ask the question, “Will this tool break anything?” when the questions we should be asking are, “Does this capability already exist, hidden in one of our other tools?” or “Can this tool satisfy more of the identified requirements than just the segment I’m concerned about?” Often a tool suite can slay multiple requirement dragons and introduce quantifiable cost savings down the road.

One of the many capabilities of the NAVITSMO is to serve as a resource for commands that desire consultancy with tool reviews and recommendations, particularly when approached from an international standards and industry best practice viewpoint. Tools are an important force multiplier, providing automation, visibility and necessary oversight into the inner workings of the beast — and, if implemented correctly and thoughtfully, they can harness said beast to do our collective bidding more efficiently.

About the NAVITSMO

Chartered in April 2012, the NAVITSMO provides IT Service Management thought leadership and assistance by creating usable products and services for the Navy ITSM community. The NAVITSMO strives for alignment of enterprise IT architecture through discrete but interlocking practice areas to help define and support organizational IT governance and management requirements. The NAVITSMO résumé boasts industry-certified expertise in ITIL, COBIT, Program and Project Management, DoDAF, IT Risk Management and Control, IT Skills Framework, Service Quality, CMMI, ISO/IEC-20000, ISO/IEC-33000, Information Security, Enterprise IT Governance, and Assessment and Audit.

For More Information

The NAVITSMO Wiki is located at: https://www.milsuite.mil/wiki/Navy_IT_Service_Mangement_Office/. Access to milSuite is CAC controlled. First time users will need to register their CAC with milSuite by clicking the ‘Register’ button, confirming their information and clicking ‘Submit’.
