Server load balancing (SLB) is a data center architecture that distributes network traffic across a group of servers. Distributing the workload in this way provides application availability, scale-out of server resources, and health management of the server and application systems.
Server Load Balancer systems are typically deployed inside the DMZ security zone, between the Internet edge routers or firewalls and the Internet-facing application servers.
Server Load Balancer Typical Configuration
In this configuration, the SLB system acts as a reverse proxy, presenting the hosted services to remote network clients. Remote clients on the Internet connect to the SLB system, which masquerades as a single application server and forwards each connection to the optimal application server.
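As a rough illustration of the reverse-proxy model (a minimal sketch, not a description of any particular product), the Python example below listens on a virtual service address, accepts client connections and relays each one to a back-end server chosen from a small pool. The addresses, ports and the simple round-robin choice are assumptions made for the example.

```python
# Minimal reverse-proxy load balancer sketch: clients connect to the
# virtual service address; each accepted connection is relayed to one
# back-end server. Addresses and ports below are illustrative only.
import itertools
import socket
import threading

BACKENDS = [("10.0.0.11", 8080), ("10.0.0.12", 8080)]  # assumed server pool
LISTEN_ADDR = ("0.0.0.0", 8080)                        # virtual service address

_next_backend = itertools.cycle(BACKENDS)              # simple round-robin pick


def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the sending side closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()


def handle(client: socket.socket) -> None:
    """Connect to the chosen back end and relay traffic in both directions."""
    backend = socket.create_connection(next(_next_backend))
    threading.Thread(target=pump, args=(client, backend), daemon=True).start()
    threading.Thread(target=pump, args=(backend, client), daemon=True).start()


def serve() -> None:
    with socket.create_server(LISTEN_ADDR) as listener:
        while True:
            client, _addr = listener.accept()
            handle(client)


if __name__ == "__main__":
    serve()
```

To the remote client there is only one server, the listen address; the choice of back end is invisible, which is what allows servers to be added, removed or replaced behind the proxy without clients noticing.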
Server Load Balancing (SLB) products have evolved to provide additional services and features and are now called Application Delivery Controllers (ADCs). ADCs combine traditional Server Load Balancing with Application Acceleration, Security Firewalls, SSL Offload, Traffic Steering and other technologies in a single platform.
Servers host application-level services such as business applications, and network services such as firewalls and DNS. The load on application servers keeps growing, in network throughput as well as CPU, memory and other server resources, and each server can support only a finite workload. Increasing capacity beyond that limit means adding server systems. In a load-balanced configuration, additional servers can be added dynamically, and live, without affecting the existing systems.
Load balancing application servers is a common way to build highly available application infrastructures. When multiple servers are load balanced, no single failure causes a serious outage: user sessions that were served by the failed server are routed to the remaining servers and re-established.
Server maintenance is difficult in non-load-balanced environments, where changing configurations on live systems can easily cause unforeseen issues and outages. Systems behind a server load balancer can be removed from service without user disruption, then upgraded, replaced or updated with new configurations, and tested by operations before being returned to service.
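To make the scale-out, failover and maintenance behavior described above concrete, here is a small, hypothetical server-pool sketch in Python: servers can be added while the pool is live, unhealthy servers are skipped when picking a target for new sessions, and a server can be drained (taken out of rotation while its existing sessions finish) before maintenance. The class and method names are assumptions for illustration, not the API of any load balancer product.

```python
# Hypothetical server pool: live scale-out, failover around unhealthy
# members, and draining so a server can be maintained without disruption.
import threading
from dataclasses import dataclass


@dataclass
class Server:
    address: str
    healthy: bool = True      # flipped by periodic health checks
    draining: bool = False    # True while being taken out for maintenance
    active_sessions: int = 0  # sessions currently served by this member


class ServerPool:
    def __init__(self) -> None:
        self._servers: list[Server] = []
        self._lock = threading.Lock()

    def add_server(self, address: str) -> None:
        """Scale out: add capacity without touching existing members."""
        with self._lock:
            self._servers.append(Server(address))

    def eligible(self) -> list[Server]:
        """Members that may receive new sessions (healthy and not draining)."""
        with self._lock:
            return [s for s in self._servers if s.healthy and not s.draining]

    def drain(self, address: str) -> None:
        """Stop sending new sessions to a server; existing sessions finish."""
        with self._lock:
            for s in self._servers:
                if s.address == address:
                    s.draining = True

    def safe_to_maintain(self, address: str) -> bool:
        """True once a draining server has no active sessions left."""
        with self._lock:
            return any(
                s.address == address and s.draining and s.active_sessions == 0
                for s in self._servers
            )


if __name__ == "__main__":
    pool = ServerPool()
    pool.add_server("10.0.0.11:8080")
    pool.add_server("10.0.0.12:8080")            # live scale-out
    pool.drain("10.0.0.11:8080")                 # prepare .11 for maintenance
    print([s.address for s in pool.eligible()])  # new sessions go to .12 only
```

New sessions are only ever sent to eligible members, so a failed or draining server simply stops appearing in that list; its users are re-established on the remaining servers while operations upgrade, replace or reconfigure it.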
There are a variety of methods that dictate how back-end servers are selected by the load balancing device. Some of the algorithms or criteria for selecting servers include:
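Common examples are round robin, weighted round robin, least connections, fastest response time, and hashing on the client's source IP address. The Python sketch below shows two of these, round robin and least connections, over a hypothetical pool of back-end addresses; the pool contents and connection counts are made up for the example.

```python
# Two common back-end selection methods over a hypothetical pool.
import itertools

BACKENDS = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]

# Round robin: hand out back ends in a fixed rotation.
_rotation = itertools.cycle(BACKENDS)


def pick_round_robin() -> str:
    return next(_rotation)


# Least connections: pick the back end with the fewest active connections,
# which adapts better than round robin when session lengths vary widely.
active_connections = {backend: 0 for backend in BACKENDS}


def pick_least_connections() -> str:
    return min(active_connections, key=active_connections.get)


if __name__ == "__main__":
    for _ in range(4):
        print("round robin       ->", pick_round_robin())
    active_connections["10.0.0.11:8080"] = 5
    active_connections["10.0.0.12:8080"] = 2
    print("least connections ->", pick_least_connections())  # picks 10.0.0.13
```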
The A10 Networks Thunder family of Application Delivery Controllers provides a broad and advanced set of features and is deployed in most of the world’s largest carrier and service provider networks.
Server Load Balancer features include: