

John Pope
SPAWAR Director, Data Center and Application Optimization
By Tina Stillions and Lisa Hunter - January-March 2013
John Pope took on a new role as Director of Data Center and Application Optimization (DCAO) at the Space and Naval Warfare Systems Command (SPAWAR) in October 2012. In this capacity, he oversees the execution of the Navy's data center consolidation effort. Prior to assuming this position, he served 29 years in the U.S. Navy, most recently as the fleet support program manager within SPAWAR’s Fleet Readiness Directorate. The Navy’s data center consolidation effort is a little over a year old. Mr. Pope was interviewed by Tina Stillions in December. Edited excerpts from the interview follow.

Q: Can you give a brief update on the Navy’s data center consolidation effort? What has been accomplished so far?

A: Since the Navy’s data center consolidation effort began in late 2011, the Navy has consolidated 18 data centers from across the country into three U.S. Navy Enterprise Data Center (NEDC) locations. We moved 107 systems and their 600-plus servers into an enterprise environment so that they could be hosted in a more efficient manner. As we transitioned systems, we refined our processes and captured some good lessons learned.

The NEDC sites are maturing in how they work together as an enterprise, and customers are seeing the benefits in terms of standard hosting services and lower rates. Although we are still experiencing some challenges in the NEDC due to the consolidation of a variety of systems from inconsistent hosting environments, our data center technicians have been able to work with the legacy application owners to fix application hosting issues rapidly as they are identified.

In fiscal year 2012, we were required to physically move servers and equipment for some systems due to technical limitations that prevented virtualization. We learned a lot about the health of the applications as we migrated them into the NEDC. It is not simply a matter of acquiring the software, copying it onto a disc and moving it over — and it’s certainly not like loading Microsoft Word onto your PC. We discovered that there are a lot of systems out there that need security help and NEDC transition engineering to get them to a state where they can be hosted and function properly in an enterprise environment. This has provided us with the opportunity to increase the efficiency of the Navy’s IT infrastructure and improve the security of our data.

By capitalizing on state-of-the-art virtualization technology and efficient data center management, we are saving money on both systems administration manpower and power usage. We have tracked the percentage of servers that we are able to virtualize through the transition process, with an internal target of 90 percent.
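As an illustration of the virtualization metric he describes, the following minimal sketch computes the share of transitioned servers that were virtualized against the 90 percent target; the server records are hypothetical and this is not the DCAO office's actual tooling.

```python
# Minimal sketch of the virtualization-rate metric described above.
# The server records are illustrative assumptions, not actual NEDC data.

TARGET_VIRTUALIZATION_RATE = 0.90  # internal target cited in the interview

def virtualization_rate(transitioned_servers):
    """Return the fraction of transitioned servers that were virtualized."""
    if not transitioned_servers:
        return 0.0
    virtualized = sum(1 for s in transitioned_servers if s["virtualized"])
    return virtualized / len(transitioned_servers)

# Hypothetical example: 3 of 4 servers virtualized during a transition.
servers = [
    {"name": "app-db-01", "virtualized": True},
    {"name": "app-web-01", "virtualized": True},
    {"name": "app-web-02", "virtualized": True},
    {"name": "legacy-hw-01", "virtualized": False},  # technical limitation, physically moved
]

rate = virtualization_rate(servers)
print(f"Virtualization rate: {rate:.0%} (target {TARGET_VIRTUALIZATION_RATE:.0%})")
print("On target" if rate >= TARGET_VIRTUALIZATION_RATE else "Below target")
```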

Reducing the number of data centers and using more efficient technology enables a net savings in maintenance costs. Additionally, bulk hardware and software procurements promote competitive pricing. When we take physical servers and virtualize them, we are able to ensure they are ‘right sized’ according to requirements, which saves the Navy money in terms of providing optimal space and computing power to run the systems effectively.

The NEDC sites have also established sophisticated information assurance monitoring techniques and disaster recovery processes to better protect data. During Hurricane Irene, which impacted both the Charleston and New Orleans NEDCs, there were security protocols and continuity of operations measures in place to avoid any system outages. We plan to work with our future customers to provide proactive steps to prepare their systems for transition to NEDC sites, including expediting the transitions and reducing the time and resources needed to move legacy systems into their new hosting environments. We published a NEDC catalog of services and a common NEDC rate card, so system and application owners can clearly understand their service and pricing options. The rate card standardizes data center costs across our NEDC sites and provides system owners with more detailed cost information for each level of service.
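To show how a common rate card lets a system owner estimate costs the same way at any NEDC site, here is a minimal sketch; the service tiers and dollar figures are hypothetical placeholders, not the published NEDC rates.

```python
# Illustrative sketch of a standardized rate-card lookup. The tiers and
# rates are hypothetical placeholders, not the published NEDC rate card.

NEDC_RATE_CARD = {
    # service level: (monthly rate per virtual server, description)
    "basic":    (500,  "shared virtual hosting, standard backup"),
    "standard": (900,  "dedicated virtual resources, enhanced monitoring"),
    "premium":  (1500, "high availability with disaster recovery replication"),
}

def monthly_hosting_cost(service_level, virtual_servers):
    """Estimate a system owner's monthly cost from the common rate card."""
    rate, _ = NEDC_RATE_CARD[service_level]
    return rate * virtual_servers

# The same lookup applies at any NEDC site, which is the point of a common rate card.
print(monthly_hosting_cost("standard", 4))
```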

There are a lot of players when it comes to running a system on a Navy network, including the hosting facility, the security accreditors, and those that support it day-to-day with information assurance patches. In many cases, these players have been working independently. Part of our role working with so many application owners is to bring the various players together. We have developed stronger relationships and our communication channels are getting better. Even though it’s still not easy, this tends to minimize some of the uncertainty, builds trust among everyone involved and streamlines the whole process a little bit.

Q: What is the data center consolidation approach? Will your team work to execute a new plan, stay with the already established approach, or create a blended plan that encompasses a little of both old and new?

A: The team is following the same success formula that was in place when I came on board. We have a list of systems and legacy data center sites that need to be consolidated, closed or removed from the list. We are working our way through that list, and are going after the sites that provide the highest return on investment. We are also looking at sites that are already part of other IT efficiency efforts, so that we can synchronize and optimize our work.

The current FY13 plan is to transition more than 150 systems from 22 data centers into consolidated government and possibly other hosting facilities. To the greatest extent possible, these will be full closures — meaning, data center operations will no longer be conducted in the facility and transformation of the facility or room to its final disposition state will be underway. Our objective during this process will be to provide the most cost effective, efficient and secure hosting services to our Navy customers and build on the momentum created in FY12.

We will continue to follow our four-step process for consolidation. As part of the assessment phase, transition and cost teams perform on-site visits to data centers and gather information on current capabilities, IT assets, system requirements and cost elements. During the engineering analysis phase, they conduct a more detailed analysis of the systems and hardware, identify any dependencies, and develop the transition plans to migrate systems to the targeted hosting facility.

In the transition phase, SPAWAR works closely with commands and data center system and application owners to prepare their systems for migration and the execution phase of transitions. During this phase, it is critical for the legacy data centers to have all the system security documentation in place and perform any necessary mitigation actions. Finally, in the sustainment phase, the systems and applications are hosted based on the agreed-upon service levels and established NEDC rates.
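The four phases Mr. Pope outlines can be thought of as an ordered pipeline. The sketch below models that ordering; the phase names come from the interview, but the tracking code itself is an illustrative assumption rather than actual DCAO software.

```python
# Minimal sketch of the four-phase consolidation process described above.
# Phase names follow the interview; the structure is illustrative only.

from enum import Enum

class Phase(Enum):
    ASSESSMENT = 1            # on-site visits: capabilities, IT assets, requirements, costs
    ENGINEERING_ANALYSIS = 2  # detailed system/hardware analysis, dependencies, transition plans
    TRANSITION = 3            # prepare and migrate systems into the target hosting facility
    SUSTAINMENT = 4           # host at agreed-upon service levels and established NEDC rates

def advance(current: Phase) -> Phase:
    """Move a data center transition to the next phase, in order."""
    if current is Phase.SUSTAINMENT:
        return current  # sustainment is the steady state
    return Phase(current.value + 1)

# Hypothetical walk-through of one site's lifecycle.
phase = Phase.ASSESSMENT
while phase is not Phase.SUSTAINMENT:
    print(f"Completed {phase.name.lower().replace('_', ' ')}")
    phase = advance(phase)
print(f"Now in {phase.name.lower()}")
```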

We have also seen how important application rationalization is to the process. The Navy is focusing on application rationalization in order to fully understand the functionality and business value of each application in the DON portfolio. If a system does not justify its up-front or support costs, then decisions must be made to remove it from the portfolio. Functional redundancies can be identified during the process and converged to reduce the application count. From a data center consolidation perspective, it is very important to be involved in this process. As technical advisors, our engineers will evaluate applications against technical criteria to help application rationalization decision makers. Ideally the decision to sunset applications will be made before they are consolidated into a NEDC, which will save us time and money.
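To make the rationalization logic concrete, the following sketch screens a hypothetical portfolio against simple cost, value and redundancy checks; the criteria, scores and threshold are invented for illustration and are not the Navy's actual decision rules.

```python
# Illustrative sketch of the technical screening that supports application
# rationalization. Criteria, weights and threshold are hypothetical.

def rationalization_recommendation(app):
    """Return a recommendation for one application based on simple criteria."""
    # Applications whose costs are not justified by business value are sunset candidates.
    if app["annual_cost"] > 0 and app["business_value_score"] / app["annual_cost"] < 0.5:
        return "candidate for sunset before NEDC transition"
    # Functional redundancy suggests converging onto another application.
    if app["redundant_with"]:
        return f"converge with {app['redundant_with']}"
    return "retain and transition to an NEDC"

# Hypothetical portfolio entries (costs and value scores are notional units).
apps = [
    {"name": "TrackerA", "annual_cost": 4.0, "business_value_score": 1.0, "redundant_with": None},
    {"name": "TrackerB", "annual_cost": 2.0, "business_value_score": 3.0, "redundant_with": "TrackerC"},
    {"name": "TrackerC", "annual_cost": 1.5, "business_value_score": 3.5, "redundant_with": None},
]
for app in apps:
    print(app["name"], "->", rationalization_recommendation(app))
```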

Because there are multiple groups addressing IT infrastructure issues and trying to optimize them, we will make every effort to work together. What I’m trying to do this year is work synergistically with other organizations in targeting IT efficiency savings. If we are focusing on the same site, we’ll work to synchronize efforts to reduce redundant effort and site disruption. This is a positive change in the way we do business, and I think it will yield better savings for the taxpayer.

Q: Has the schedule changed? Is the effort on schedule or behind schedule?

A: We have developed a preliminary integrated master schedule for executing the 22 site closures targeted for FY13 and have completed nearly all of the site visits. Typically, the first third of the year entails conducting detailed engineering analyses where we verify system requirements and lay out a more detailed, risk-based schedule. The remainder of the year will be allocated for executing the site transitions. My feeling is that we certainly have a challenging, but achievable schedule for FY13.

Q: Can you explain the Data Center and Application Optimization office alignment with the Fleet Readiness Directorate and the reason behind it? How will it impact the consolidation effort?

A: In November 2011, SPAWAR was appointed by the Assistant Secretary of the Navy for Research, Development and Acquisition as the Navy data center execution agent and technical authority. The Data Center Consolidation Task Force was stood up to oversee the consolidation process and the sustainment of the transitioned systems. In November 2012, SPAWAR established the Data Center and Application Optimization (DCAO) office within the Fleet Readiness Directorate (FRD) to replace the task force with a more permanent organization.

The FRD provides a fleet focus on readiness and sustainment of in-service C4I systems. It was an ideal candidate home because the FRD was already working issues that affect fleet readiness, and certainly data centers are an element of that. The fleet modernization aspect of what we do in FRD is very similar to data center consolidation. The process of taking a ship, removing old capability, and putting in new capability and optimizing it is not unlike what we do when we go into a data center and move that application to a more efficient, modern system.

As we strive to make sure the Navy’s data is secure and available, we also want to host it in the most cost-effective manner possible. By administratively aligning under the FRD, we can ensure the Navy’s data center consolidation effort will have the resources and support required to achieve its mission. It remains a high-priority Navy IT efficiency initiative, under a different name.

Q: Have the Navy Enterprise Data Centers (NEDCs) changed, including expanded or retracted? What is the current status?

A: Last year, we migrated systems to three NEDC sites located in Charleston, New Orleans and San Diego. We are currently working with the Department of the Navy Deputy Chief Information Officer to evaluate other NEDC options, including Marine Corps facilities. In addition, we are reviewing commercial hosting options. As we anticipate growth and project forward, SPAWAR’s chief engineer is developing the technical architecture for data centers. Part of this will determine the optimum laydown of Navy data centers, while also taking into consideration various technical factors, such as connectivity, people, power usage and data center location in relation to the customer. At that point, we can make some recommendations as to where we think the centers should be located. I expect the number of data centers will grow somewhat as a result.

Q: Are any figures available yet as to how much money the Navy anticipates saving with consolidation?

A: The short answer is: not yet. However, there have been some improved projections about savings. We are taking a look at some of the sites that were closed in FY12, the applications that are hosted today, and doing a comparison of the before and after hosting costs. When the applications were hosted at their old data centers, the site often did not keep data on exactly that one application, running on one server, sitting on one rack, located in one room. The [owners of the] site didn’t know how much money it cost to run that one application. They were running several; not just one. We have to go back and estimate what the cost was for individual applications running on the old system, and do some analysis while it runs in our data center today, so that we can get a better estimate of cost savings.

There are some models available that can tell us what we should be saving. But it is kind of like the car mileage estimates we get from car manufacturers. There are a lot of variables. Metaphorically speaking, I want to get some of those actual mileage numbers so that we can get a more accurate picture. In doing that, we can also help the modelers by telling them whether their models are accurate and where they can be improved.

As other federal agencies have discovered — and the U.S. Navy is no different — accurately measuring cost savings is an extremely challenging effort. With our sites providing a computing environment, security and support services at an enterprise service rate, we hope to have a better understanding of what the hosting costs for consolidated systems will be. We are collaborating with the legacy data centers to gain an accurate characterization of the current operating costs, so we can determine actual cost savings. Moving forward, we are developing a good cost baseline from which to measure our efforts, so I anticipate we will have a better understanding of what our savings are in the near future.
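The before-and-after comparison he describes can be illustrated with a short sketch. All figures below are hypothetical; the point is that a shared legacy site cost must first be allocated across the applications it hosted before it can be compared with a per-system NEDC hosting cost.

```python
# Illustrative sketch of the before/after cost comparison described above.
# All figures are hypothetical. Legacy sites typically metered a whole room,
# so the legacy cost per application has to be estimated by allocation.

def estimated_legacy_cost_per_app(total_site_cost, apps_hosted):
    """Rough allocation of a shared legacy data center cost to one application."""
    return total_site_cost / apps_hosted

def annual_savings(legacy_cost, nedc_cost):
    """Savings and percentage saved after moving an application to an NEDC."""
    saved = legacy_cost - nedc_cost
    return saved, saved / legacy_cost

# Hypothetical example: a legacy site that cost $900K per year hosted 6 applications.
legacy_per_app = estimated_legacy_cost_per_app(900_000, 6)     # roughly $150K/year
saved, pct = annual_savings(legacy_per_app, nedc_cost=95_000)  # assumed NEDC hosting cost
print(f"Estimated savings: ${saved:,.0f}/year ({pct:.0%})")
```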

Q: Have there been any technical challenges or glitches in the effort so far? Anything that wasn’t anticipated but that was dealt with immediately to keep the effort on course? Do you foresee any technical challenges as you step into this new position?

A: Some of the technical challenges we are experiencing include trying to characterize the health of the applications that we are moving over. We are using a red, yellow, green status approach. Applications are green if they are modern, running smoothly, have security patches installed, everything is current, and it is just a matter of moving the application from one hosting environment to a more efficient environment. Yellow applications are older and may need a more current operating system or security patch, or more engineering to clean them up a bit. If an application is in what I categorize as the red zone, the application may not be able to be moved.

The challenge occurs if I lay out a transition schedule that assumes a certain mix of easy, moderate and hard (green, yellow and red) systems, but we do not see the full picture until I am well into transitions. At that point I may see a skewed picture of more yellow and red systems, in which case it starts to affect the overarching schedule. It is at that point that you begin to comprehend the difficulty of the task at hand and how much it will perturb the schedule.
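The red, yellow, green triage and its effect on the schedule can be sketched as a simple classification over a portfolio; the specific readiness checks below are illustrative assumptions, not the DCAO office's actual criteria.

```python
# Minimal sketch of the red/yellow/green triage described above. The checks
# are illustrative assumptions about what "current" and "patched" mean.

from collections import Counter

def classify(app):
    """Assign a transition-readiness color to an application."""
    if not app["can_be_hosted"]:
        return "red"      # may not be movable without major rework
    if app["os_current"] and app["patched"]:
        return "green"    # move as-is to the more efficient environment
    return "yellow"       # needs an OS upgrade, patching or some engineering

# Hypothetical portfolio of applications awaiting transition.
portfolio = [
    {"name": "A", "can_be_hosted": True,  "os_current": True,  "patched": True},
    {"name": "B", "can_be_hosted": True,  "os_current": False, "patched": True},
    {"name": "C", "can_be_hosted": False, "os_current": False, "patched": False},
]

mix = Counter(classify(app) for app in portfolio)
print(mix)  # a skew toward yellow and red is what perturbs the transition schedule
```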

Technically, however, the consolidation effort has been fairly straightforward. We documented quite a few of the lessons learned in FY12 so that our transition teams can draw on them this year to refine our standard operating procedures. The certification and accreditation piece, and ensuring all the system security documentation was in place, was more time intensive than originally planned. We spent a lot of time working to expedite the process without sacrificing security precautions. One of the significant lessons learned was the importance of engaging with our customers and stakeholders. We saw the need to keep them informed of our process and progress. We will continue to conduct site visits to most of the legacy data centers to meet with their leadership, address any concerns and gather feedback on the process.

Q: Have any green techniques and technologies been incorporated into the strategy to manage and minimize the Navy’s carbon footprint?

A: We received funding from the Office of Naval Research to evaluate new technologies that can make the data center more energy efficient. One is a smart metering technology for our data centers in Charleston, New Orleans and San Diego. This technology enables the Navy to understand energy use, identify improvement areas and track energy consumption.

The second project is a new self-cooling technology for servers. About the size of a placemat, it is placed directly on servers to cool them, rather than cooling all of the surrounding air. This is much more efficient than cooling the entire room. We are definitely trying to determine what drives our costs, such as power and cooling. Prior to consolidation, a site may have metered power for the whole data center room; now we have technology that allows us to monitor our power and cooling more precisely. I can cool more locally and that is certainly a big savings.

Q: How will you measure success?

A: We will measure our success by how efficiently and effectively we consolidate legacy data centers and maximize the Navy’s return on investment. Transition progress will be tracked by reporting on metrics for site consolidations and system and server transitions. That level of detail will enable us to more fully understand the scope of our efforts. Another metric we will be tracking will be the percentage of savings in sustainment costs, which is the difference between the previous cost to host systems and the post-consolidation cost, expressed as a share of the previous cost.

Another part of return on investment is the reuse of software and hardware. Our office is implementing processes that will capture cost avoidance through the reuse of assets in the enterprise data centers and in offsetting new IT purchases across the Navy. I also want to track the number of physical servers that were virtualized, so that we can ensure the ‘right-sizing’ of existing applications and reduce the physical footprint required to sustain Navy applications. These metrics will provide leadership with a rounded picture of how well we are succeeding in our mission to consolidate the Navy’s data centers.

This is a challenging job, and success isn’t going to be easy. Now that the NEDC sites are up and running, we’re already seeing enhanced security, efficiency and reliability. Data center consolidation isn’t something we can succeed at with just a couple of years of effort. There are enterprise behaviors and results we are trying to achieve, and I think we are getting there a step at a time.

Tina C. Stillions is with the SPAWAR HQ public affairs office.

Lisa Hunter provides strategic communications support to SPAWAR’s Data Center and Application Optimization office.


Special floors and energy-efficient cooling technology are part of the Navy's effort to realize fiscal and energy savings. Data center photos by Rick Naystatt/SPAWAR HQ.

Data center facilities manager Bobby Nutting examines color-coded wires, which replaced more expensive copper wires, at NEDC San Diego.