
Network Load Balancing End-node Configuration Pattern

Description

Load balancing technology distributes workload across servers to improve availability, performance, and scalability. Network load balancers are implemented at the workgroup/server switch layer. Because load balancing increases performance consistency and application availability, it is recommended for NIH enterprise applications. A one-to-one mapping can be used to reach a specific server, or a one-to-many mapping to reach a group of servers. The load balancer also offers multiple algorithms for mapping user requests to servers (e.g., round-robin, random, or server-utilization-based) and provides proxy services.
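The following minimal Python sketch illustrates the request-mapping algorithms named above (round-robin, random, and utilization-based selection). The server names and connection counts are assumptions made for illustration only; they are not part of this pattern or of any NIH configuration.

    import itertools
    import random

    # Hypothetical back-end pool; names and connection counts are illustrative.
    servers = ["app-server-1", "app-server-2", "app-server-3"]
    active_connections = {s: 0 for s in servers}

    round_robin = itertools.cycle(servers)

    def pick_round_robin():
        """Return the next server in strict rotation."""
        return next(round_robin)

    def pick_random():
        """Return a server chosen uniformly at random."""
        return random.choice(servers)

    def pick_least_utilized():
        """Return the server with the fewest active connections
        (a simple stand-in for utilization-based selection)."""
        return min(servers, key=lambda s: active_connections[s])

    # Dispatch one request using the utilization-based policy.
    target = pick_least_utilized()
    active_connections[target] += 1
    print("forwarding request to", target)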

The End-node Configuration does not provide Network Address Translation (NAT). The load-balanced servers can therefore access other resources on the network directly, without having to use the load balancer's proxy services; this facilitates access to backup and other services.

A single load balancer can improve performance by efficiently allocating workload across multiple servers. To deliver improved availability, however, load balancers must be deployed in pairs with a hot standby configured; otherwise, the load balancer becomes a single point of failure for the servers behind it.
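The sketch below shows the hot-standby idea in simplified form: traffic is routed through the active unit while it passes a health check, and falls back to the standby unit when it does not. The hostnames and the TCP-connect health check are assumptions for illustration, not details taken from this pattern.

    import socket

    # Hypothetical active/standby load-balancer pair; addresses are illustrative.
    ACTIVE = ("lb-active.example.nih.gov", 80)
    STANDBY = ("lb-standby.example.nih.gov", 80)

    def is_alive(addr, timeout=1.0):
        """Crude health check: can a TCP connection be opened to the node?"""
        try:
            with socket.create_connection(addr, timeout=timeout):
                return True
        except OSError:
            return False

    def current_load_balancer():
        """Prefer the active unit; fail over to the hot standby if it is down.
        With only a single load balancer, no fallback exists and that unit
        becomes a single point of failure."""
        return ACTIVE if is_alive(ACTIVE) else STANDBY

    print("routing traffic through", current_load_balancer())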

Diagram

Network Load Balancing End-node Configuration Pattern

Benefits

  • Provides better performance and availability than network configurations without load balancing.
  • Allows servers to connect directly to other network resources (e.g., backup services).

Limitations

  • Implementing load balancing requires additional hardware, software, and support costs, regardless of the configuration.

Time Table

This architecture definition was approved on: February 8, 2005

The next review is scheduled in: TBD