The NonConformist Techie

What is C10k Problem? Why Should you Care?

Factors to consider to improve server performance

Ruby Valappil
Oct 24, 2022
Photo by Nastya Dulhiier on Unsplash

In our last newsletter, we discussed how a reverse proxy handles millions of connections at lightning speed.

It was around that time that I first heard of the C10k problem. Apparently, NGINX was written by Igor Sysoev to solve this very challenge.


It took me a while to understand the concepts and the problem at a deeper level. This article is my attempt to reiterate my understanding, and I hope it helps you as well.

What is C10K?

The name C10K was coined sometime in 1999 by software engineer Dan Kegel.

The C in this abbreviation stands for concurrent connections, and 10K stands for ten thousand. Together, they denote the problem of handling 10,000 concurrent connections.

C10K describes the challenge web servers face in handling that many connections at the same time.

Many of us started writing server-side code when servers were already sophisticated and modern enough to handle many connections and provide good throughput.

Multicore processors that run multiple tasks simultaneously through parallel processing and multithreading have become the norm.

If anything, cloud services have made these processes even easier and scaling out more convenient than ever before.

As servers became more sophisticated over the years, the number of consumers also grew. The 10K problem evolved into the 10M problem and beyond.

A better understanding of what enables a server to handle thousands of requests per second helps us make the right call when our systems have to scale to handle growing web traffic.
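For context, the technique that finally cracked C10K, and that NGINX is built around, is event-driven I/O: one thread multiplexing many connections instead of one thread per connection. Below is a minimal, illustrative echo server using Python's asyncio. It's a sketch of the model, not of NGINX itself:

```python
import asyncio

async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    # While we await data, the event loop is free to serve other clients.
    data = await reader.readline()
    writer.write(data)          # echo the line back
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main() -> None:
    server = await asyncio.start_server(handle, "127.0.0.1", 8888)
    async with server:
        # One thread, one event loop, an arbitrary number of connections.
        await server.serve_forever()

asyncio.run(main())
```

A thread-per-connection server hits memory and scheduling limits long before an event loop does, which is exactly the wall the C10K paper described.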

Let’s first try to understand two common ways software applications and servers scale.

By scaling, we mean the ability of a system to grow or shrink based on changing demand.

  1. Vertical Scaling

  2. Horizontal Scaling

Vertical Scaling

Vertical scaling is adding more resources to an existing system.

This can include but is not limited to increasing the processing power, memory, storage, or network speed.

In simple words, vertical scaling makes an existing machine more powerful by adding more resources to it.

Horizontal Scaling

Horizontal Scaling is adding more machines to our infrastructure.

This means that instead of adding more resources to an existing machine, new servers with capabilities similar to the existing ones are added to the infrastructure to meet increasing demand.

In a cloud environment, adding more server instances is an example of horizontal scaling.
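To make that concrete, here is a hedged sketch using AWS's boto3 SDK. The Auto Scaling group name web-asg is a made-up example, and the call assumes valid AWS credentials and an existing group:

```python
import boto3

# A sketch of horizontal scaling as a single API call.
# "web-asg" is a hypothetical Auto Scaling group name.
autoscaling = boto3.client("autoscaling")

autoscaling.set_desired_capacity(
    AutoScalingGroupName="web-asg",
    DesiredCapacity=6,      # grow the fleet to six instances
    HonorCooldown=True,     # respect the group's scaling cooldown
)
```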

When a system can no longer handle incoming traffic efficiently, organizations decide to scale their IT infrastructure either horizontally or vertically.

But what if improving a server's performance isn't always restricted to scaling up (vertical) or scaling out (horizontal)?

Let’s take a look at another factor that contributes to performance optimization.

Optimizing Network Technology

Network optimization has become less of a headache with the introduction of cloud-based services. Even in the cloud, though, without the right architecture one ends up with the same server-performance issues as with on-prem servers. Not to mention, every additional resource also spikes the cloud bill.

In typical IT scenarios, various network optimization tools and best practices are used to monitor and improve networking technology.

Some of the techniques used to improve server performance are:

  1. Proxy Servers and CDN

A proxy server is a middleman: it sits in front of the application servers.

These servers handle incoming requests differently from traditional web servers. Instead of dedicating a thread to each request, a proxy like NGINX runs a small pool of worker processes, each using an event loop to multiplex many concurrent connections.

These servers also reuse existing TCP connections (HTTP keep-alive), avoiding repeated handshakes and speeding up responses, as the sketch after this section shows.

CDN (Content Delivery Network) servers, on the other hand, cache web resources (static content). Static content can thus be placed geographically close to consumers and served faster.
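Connection reuse is easy to see from the client side. The sketch below, using only Python's standard library, sends five requests over a single TCP connection; example.com is just a placeholder host:

```python
import http.client

# One TCP connection, reused for several HTTP/1.1 requests (keep-alive).
conn = http.client.HTTPSConnection("example.com")

for i in range(5):
    conn.request("GET", "/")
    resp = conn.getresponse()
    resp.read()  # drain the body so the connection can be reused
    print(f"request {i}: {resp.status}")

conn.close()
```

Skipping the TCP (and TLS) handshake on every request is a large part of how a reverse proxy speeds up traffic to its backends.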

  2. Load Balancers

Like a proxy server, a load balancer sits in front of our web and application servers.

Its main job is to distribute incoming client requests across servers so that every server gets a fair share of the work and no single server is overloaded with traffic.

Load balancers can also detect when a new server is added or an existing one goes down, and route traffic only among servers that are online. This ensures our resources are used efficiently and the servers' performance doesn't degrade.
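The simplest balancing strategy is round robin: hand each new request to the next server in rotation. Here's a minimal sketch, with made-up backend addresses:

```python
import itertools

# Hypothetical backend servers behind the load balancer.
backends = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]
rotation = itertools.cycle(backends)

def route(request_id: int) -> str:
    """Pick the next backend in rotation for an incoming request."""
    backend = next(rotation)
    print(f"request {request_id} -> {backend}")
    return backend

for i in range(6):  # six requests spread evenly over three servers
    route(i)
```

Real load balancers layer health checks and smarter strategies (least connections, weighted routing) on top of this basic idea.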

Conclusion

Whether our applications are deployed to the cloud or to on-prem physical servers, understanding server performance helps us design better systems and reduce infrastructure costs.

