Networking

How Popular Sites Handle Millions of Connections Every Second

Learn how websites handle massive amounts of traffic

Every day, millions of people use the internet. Each of them connects throughout the day to a whole host of websites and services, and some websites handle millions of these connections on their own. The world’s largest websites, such as Facebook and YouTube, serve billions of people around the world, and any disruption means a significant loss of revenue.

Websites can use a number of techniques to ensure that their services are always available and that their servers stay up and running. There is a limit to the number of users a website can support concurrently without measures in place to accommodate a large volume of traffic. If a website experiences an unexpectedly high volume of traffic, it may become inaccessible to anyone.

The Popularity of Facebook

Facebook is one of the internet’s most popular websites. The social media platform has billions of users around the world and supports millions of them simultaneously. We all know that Facebook is popular, but some people are still surprised by the scale at which it operates. For example, Facebook users send more than 10 million messages and collectively click the “Like” button more than 4.5 billion times a day.

Facebook has combined proven methods with its own proprietary technology to deal with this large volume of traffic. This helps it reduce the demand on any single database while also reducing the amount of information sent between its users and its servers, which is vital given how many Facebook users stay connected to the service throughout the day on their mobile devices. This not only improves the user experience, but also helps Facebook keep the cost of running those servers as low as possible.

Keeping Servers Online

Servers store the content that websites and online services need to run. Whenever you load a website or online service, you download the required data from the web server that hosts it. Web servers send data to users in the form of packets, with individual files split into thousands of small packets. This allows multiple users to download different parts of a file at the same time, so more users can download a single file without triggering slowdowns.
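
As a rough illustration only, not how any particular web server is implemented, the sketch below splits a file into small fixed-size chunks, similar in spirit to how data is broken into packets before being sent. The 1,400-byte chunk size and the "video.mp4" file name are arbitrary examples.

    # Illustrative sketch: split a file into small fixed-size chunks,
    # loosely analogous to breaking data into packets for transfer.
    def split_into_chunks(path, chunk_size=1400):
        chunks = []
        with open(path, "rb") as f:
            while True:
                piece = f.read(chunk_size)
                if not piece:
                    break
                chunks.append(piece)
        return chunks

    # Each chunk can be sent independently, so different parts of the same
    # file can be served to different users at the same time.
    chunks = split_into_chunks("video.mp4")
    print(f"{len(chunks)} chunks of up to 1400 bytes each")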

But beyond a certain point, this alone is not enough. The busiest online servers need numerous measures in place to ensure that they stay up around the clock. For companies like Facebook, every minute of downtime means lost revenue and frustrated users, and online services that gain a reputation for unreliable uptime will struggle to build their user base.

Redundancy is the first line of defense against server failure. It simply means that if a server becomes inaccessible, users are redirected to a backup instead. Servers are designed to handle multiple connections simultaneously, but the busiest websites require more powerful servers. As with any electronic device, there is always the possibility of hardware failure; if a server’s hard drive fails and there is no backup, the data can be irretrievably lost.
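
To show the idea in the simplest possible terms, the sketch below tries a primary server and falls back to a backup if the request fails. The host names are made up, and in practice failover usually happens in the infrastructure rather than in application code like this.

    import urllib.request
    import urllib.error

    # Hypothetical primary and backup hosts, listed in order of preference.
    SERVERS = ["https://primary.example.com", "https://backup.example.com"]

    def fetch_with_failover(path):
        # Try each server in turn; if one is unreachable, fall back to the next.
        for base in SERVERS:
            try:
                with urllib.request.urlopen(base + path, timeout=2) as resp:
                    return resp.read()
            except (urllib.error.URLError, OSError):
                continue  # this server failed; try the backup
        raise RuntimeError("all servers are unavailable")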

Balancing the Load

Load balancing is just what it sounds like: ensuring that the computational load needed to serve all connected users is spread across multiple servers. Online services such as Twitter handle far too many connections for any single server to cope with. Whenever a new request comes in, the domain name server rotates through the IP addresses associated with the domain, handing out the next one in the list each time. For many websites this is a fine solution, but the largest online providers must use their own proprietary systems to handle the load.
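
The rotation described above can be sketched in a few lines. The IP addresses here are placeholders, and a real DNS server performs this as part of answering lookups rather than in application code.

    import itertools

    # Placeholder IP addresses that a domain might resolve to.
    ADDRESSES = ["203.0.113.10", "203.0.113.11", "203.0.113.12"]

    # Cycle through the list so each new request gets the next address in turn.
    rotation = itertools.cycle(ADDRESSES)

    def next_server():
        return next(rotation)

    # Ten requests are handed out across the three addresses in turn.
    for _ in range(10):
        print(next_server())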

When you connect to Facebook, your connection is automatically routed to a server that can handle the load. If a server becomes overloaded, Facebook stops sending it new connections. Load balancing used to be done with dedicated physical hardware, which is still used occasionally, but it can now be performed in the cloud as well.
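
Facebook’s actual routing logic is proprietary, but the general idea of sending new connections to the least-loaded server, and skipping servers that are at capacity, can be sketched as follows. The server names, connection counts, and capacity are invented for illustration.

    # Hypothetical servers with their current connection counts.
    servers = {"web-1": 420, "web-2": 980, "web-3": 150}
    CAPACITY = 1000  # assumed maximum connections per server

    def route_new_connection():
        # Ignore servers that have reached capacity.
        available = {name: load for name, load in servers.items() if load < CAPACITY}
        if not available:
            raise RuntimeError("all servers are overloaded")
        # Send the new connection to the least-loaded remaining server.
        choice = min(available, key=available.get)
        servers[choice] += 1
        return choice

    print(route_new_connection())  # "web-3", currently the least-loaded server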

Load balancing is not just about minimizing downtime; it also serves a practical economic purpose. Consider a company like Facebook, with many servers running around the world. A server sitting idle and handling no client requests draws very little power, but it has to work harder, and draw more power, as more users connect. That pattern is predictable, which makes it possible to calculate the most efficient way to spread the load.

Facebook found that its servers used more energy during quiet periods, when they were mostly idle, than it had planned. What wasn’t expected was that servers under low load were drawing almost as much power as servers under medium load. It is therefore more cost-effective for Facebook to run its servers at moderate capacity, or let them sit fully idle, rather than leave them running at low load.
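
The numbers below are invented solely to show the shape of the argument; they are not Facebook’s measurements. If a server at low load draws nearly as much power as one at medium load, consolidating work onto fewer, busier servers uses less total energy.

    # Hypothetical power draw in watts; not real measurements.
    IDLE_W, LOW_LOAD_W, MEDIUM_LOAD_W = 60, 180, 200

    # Option A: spread the work thinly across four servers at low load.
    spread_out = 4 * LOW_LOAD_W                    # 720 W

    # Option B: consolidate onto two servers at medium load, leave two idle.
    consolidated = 2 * MEDIUM_LOAD_W + 2 * IDLE_W  # 520 W

    print(spread_out, consolidated)  # consolidation draws less total power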

Keeping Servers Safe

A server is a physical object: a computer that can be broken or damaged. If a server is physically damaged or destroyed, it cannot accept incoming connections. This means that a problem at the data center, whether a power outage, a natural disaster, or even a plumbing issue, can make critical servers unavailable.

Like any computer, a server can get quite hot when its processor is under heavy load. In addition to the cooling systems built into the servers themselves, the data centers where they are housed are kept air-conditioned and cool. Climate control systems also regulate moisture, preventing excess humidity in the facility.

Data centers may also need to be built to account for natural disasters, depending on where they are located. For example, data centers in California are housed in buildings designed to withstand earthquakes, while the server racks themselves are reinforced to prevent them from collapsing. As a result, the largest tech firms keep servers around the globe.

Protection and Monitoring

A distributed denial-of-service (DDoS) attack is a type of cyber attack that uses a large number of simultaneous connections to overwhelm an internet server and make it inaccessible. Data centers employ engineers to monitor their networks for irregular traffic and respond to any threats. This human oversight, combined with a number of automated methods, is commonly used to prevent DDoS attacks.
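
One simple automated method, sketched below with made-up thresholds, is to count requests per client address over a short window and flag addresses whose request rate is far above normal. Real DDoS mitigation is considerably more sophisticated than this.

    from collections import Counter

    WINDOW_SECONDS = 10            # assumed monitoring window
    MAX_REQUESTS_PER_WINDOW = 500  # assumed per-address threshold

    def find_suspicious_sources(request_log):
        # request_log is a list of (timestamp, source_ip) pairs from the window.
        counts = Counter(ip for _, ip in request_log)
        return [ip for ip, n in counts.items() if n > MAX_REQUESTS_PER_WINDOW]

    # Example: one address sends far more requests than the others.
    log = [(0, "198.51.100.7")] * 900 + [(1, "192.0.2.4")] * 40
    print(find_suspicious_sources(log))  # ['198.51.100.7']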

Even with these defenses in place, however, successful DDoS attacks still occur and can cause serious problems for data centers and their customers. Data centers therefore have procedures in place to respond to DDoS attacks when they are detected.

The servers behind the most popular websites and online services each manage thousands of connections. With millions of users drawing on resources at the same time, organizations need sufficient infrastructure in place to ensure that their servers are not overloaded. Intelligently managing load across multiple servers lets websites make the most effective use of their available resources, while redundancy ensures that backups are available when things go wrong.

Whenever you visit a website or scroll through your Facebook feed, you load content from multiple servers. To the end user it all happens smoothly and almost instantly, but there is far more going on behind the scenes than first appears. Without measures in place to spread the load, traffic to the largest websites would quickly overwhelm their servers.
