Load balancing is a set of techniques for configuring servers and other hardware so that the workload is distributed evenly among them. This reduces dependence on any single server and makes it easy to add resources so that performance scales with the resources added.

INTRODUCTION:

Load balancing is a method of distributing workload across multiple servers, network interfaces, hard drives, or other computing resources. Typical data center implementations rely on large, powerful (and expensive) computing hardware and network infrastructure, which are subject to the usual risks associated with any physical device, including hardware failure, power or network interruptions, and resource limitations in times of high demand.
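
As a minimal illustration of the basic idea, the following Python sketch cycles incoming requests across a small pool of servers in round-robin fashion. The server names and request IDs are invented placeholders, not part of any particular product or API.

    import itertools

    # Hypothetical pool of back-end servers; the names are placeholders.
    servers = ["web-1", "web-2", "web-3"]

    # Round-robin: hand each incoming request to the next server in the cycle,
    # so no single machine receives the entire workload.
    pool = itertools.cycle(servers)

    def route(request_id):
        # Pick the next server for this request.
        return next(pool)

    if __name__ == "__main__":
        for request_id in range(6):
            print(f"request {request_id} -> {route(request_id)}")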

SUMMARY:

Goals of Web Load Balancing:

Depending on the goals, the configuration of the servers and hardware will vary somewhat. Web load balancing has two potential goals: scalability and availability. The two are not exclusive; improving scalability tends to improve availability, and vice versa. Your web hosting provider can help you determine which of these you need most.
Cloud computing involves virtualization, distributed computing, networking, software, and web services. A cloud consists of several elements such as clients, data centers, and distributed servers. It offers fault tolerance, high availability, scalability, flexibility, reduced overhead for users, reduced cost of ownership, on-demand services, and more.
Central to these issues is the establishment of an effective load balancing algorithm. The load can be CPU load, memory capacity, delay, or network load. Load balancing is the process of distributing the load among the various nodes of a distributed system to improve both resource utilization and job response time, while avoiding a situation where some nodes are heavily loaded and others are idle or doing very little work. Load balancing ensures that every processor in the system, or every node in the network, does approximately the same amount of work at any instant of time. The technique can be sender-initiated, receiver-initiated, or symmetric (a combination of the sender-initiated and receiver-initiated types).
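
To make the sender-initiated case concrete, here is a simplified Python sketch in which an overloaded node pushes tasks, one at a time, to the currently least-loaded node; a receiver-initiated policy would instead have a lightly loaded node pull work from a busy one. The node names, load values, and threshold are assumed for illustration only.

    # Simplified model of sender-initiated load balancing.
    # Node names, load values, and the threshold are illustrative assumptions.
    loads = {"node-A": 9, "node-B": 2, "node-C": 4}  # pending tasks per node
    THRESHOLD = 6  # a node above this value is considered overloaded

    def rebalance(loads, threshold):
        """Move one task at a time from overloaded nodes to the least-loaded node."""
        for sender in list(loads):
            while loads[sender] > threshold:
                receiver = min(loads, key=loads.get)  # currently least-loaded node
                if receiver == sender or loads[receiver] >= loads[sender] - 1:
                    break  # no further transfer would improve the balance
                loads[sender] -= 1
                loads[receiver] += 1
        return loads

    print(rebalance(loads, THRESHOLD))  # {'node-A': 6, 'node-B': 5, 'node-C': 4}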

Our objective is to develop an effective load balancing algorithm using the divisible load scheduling theorem to optimize different performance parameters (for example, maximizing throughput or minimizing latency) for clouds of different sizes (with a virtual topology that depends on the application requirements).
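
As a rough illustration of the divisible-load idea, the Python sketch below splits a divisible workload among nodes in proportion to their processing speeds so that all nodes finish at about the same time. It ignores communication delays, which the full divisible load theory accounts for, and the node names, speeds, and workload size are made-up values.

    # Illustrative divisible-load split: each node receives a share of the work
    # proportional to its processing speed, so all nodes finish together.
    # Node names, speeds (work units per second), and total workload are assumed.
    speeds = {"vm-1": 4.0, "vm-2": 2.0, "vm-3": 1.0}
    total_work = 700.0  # divisible work units

    def split_load(speeds, total_work):
        total_speed = sum(speeds.values())
        return {node: total_work * s / total_speed for node, s in speeds.items()}

    if __name__ == "__main__":
        for node, share in split_load(speeds, total_work).items():
            # With these numbers every node needs share / speed = 100 s,
            # so no node sits idle while another is still computing.
            print(f"{node}: {share:.0f} units, done in {share / speeds[node]:.0f} s")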

CONCLUSION:

The basic purpose of implementing scalability in a web-based system is to ensure that you can gracefully increase performance by adding new components. Web load balancing makes this possible by using the various tiers described in this article. We hope you found it useful.