
ALB vs NLB - AWS load balancers benchmarked for web applications

At AWS, load balancers are called Elastic Load Balancers (ELBs). An ELB distributes incoming traffic among different targets, thus ensuring high availability, fault tolerance, scalability and efficiency.

(* for cross-region high-availability one could have Route 53 route traffic to multiple ELBs in different regions using a Routing Policy e.g. weighted routing policy)
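As a sketch of what that could look like (all record names, IDs and weights below are placeholders, not values from this experiment), a weighted alias record pointing at one regional ELB could be created with the aws route53 change-resource-record-sets command and a change batch along these lines:

```json
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "app.example.com",
      "Type": "A",
      "SetIdentifier": "us-east-1-alb",
      "Weight": 70,
      "AliasTarget": {
        "HostedZoneId": "ZELBEXAMPLE123",
        "DNSName": "my-alb-123456789.us-east-1.elb.amazonaws.com",
        "EvaluateTargetHealth": true
      }
    }
  }]
}
```

A second record with the same Name but a different SetIdentifier, Weight and AliasTarget would cover the other region; Route 53 then splits traffic in proportion to the weights.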

Traffic targets can be EC2 instances, IP addresses, Lambda functions or other ELBs - depending on the ELB type in use. AWS currently offers four types of load balancers: the Application Load Balancer (ALB), the Network Load Balancer (NLB), the Gateway Load Balancer (GWLB) and the Classic Load Balancer (CLB).

Objective of the experiment

Out of the 4 types of load balancer offered by AWS, I was particularly interested in how the ALB and NLB would fare when tasked to serve a web application under relatively heavy load. Specifically, I wanted to see how the high-performing, low-latency NLB would stack up against the more HTTP-optimized ALB when handling what is almost always HTTP, i.e. layer 7 (OSI model), traffic.

Setup and environment

For the purposes of this experiment I deployed two separate Elastic Beanstalk (Web server) environments in the North Virginia (us-east-1) region.

For both environments I chose the Managed platform option, Node.js (with defaults) and used Sample application application code. Under configuration presets I chose High availability while in the Capacity section I set Min and Max to 2 instances of t3.micro instance types.

The only difference for the two environments was the load balancer type set under Load Balancer Type.
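For completeness, the same setting can also be applied in code rather than through the console, via an .ebextensions configuration file. This is a sketch based on Elastic Beanstalk's documented aws:elasticbeanstalk:environment option namespace (the file name is arbitrary):

```yaml
# .ebextensions/loadbalancer.config
# Sets the load balancer type for the Elastic Beanstalk environment.
# Valid values are "application" (ALB), "network" (NLB) and "classic".
option_settings:
  aws:elasticbeanstalk:environment:
    LoadBalancerType: network
```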

I then clicked “Skip to review” to let AWS configure everything else with its defaults.

Assumption: considering I ran my code from my local machine, I made the assumption that any variation in performance or speed at ISP/connection/DNS level was negligible.

Testing procedure and execution

Benchmarking was done with ab, the Apache HTTP server benchmarking tool. GNU awk was used to parse ab’s results.

An ab benchmark call with a concurrency of 50 and a time limit of 5 seconds, where the endpoint is set to a dummy http://alb-endpoint-address.us-east-1.elasticbeanstalk.com/ URL, looks like this:

ab -c 50 -t 5 http://alb-endpoint-address.us-east-1.elasticbeanstalk.com/
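For reference, the part of ab’s output that the parsing later relies on is the “Connection Times” table. A sample of what it looks like (all numbers here are made up for illustration):

```shell
# A sample of ab's "Connection Times" table (numbers illustrative)
cat > sample-ab-output.txt <<'EOF'
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1    2   0.4      2       4
Processing:    21   26   3.5     25      48
Waiting:       20   25   3.4     24      46
Total:         22   28   3.6     27      50
EOF

# The median is the fifth whitespace-separated field of each row
awk '/Total:/ {print $5}' sample-ab-output.txt    # prints 27
```

Note that each row carries five numbers (min, mean, sd, median, max), so the median sits in awk’s fifth field, counting the row label as the first.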

I used the following script to automate the benchmarking procedure:



#!/bin/bash

# These endpoint addresses were replaced with actual endpoint addresses
ALB_ENDPOINT="http://alb-endpoint-address.us-east-1.elasticbeanstalk.com/"
NLB_ENDPOINT="http://nlb-endpoint-address.us-east-1.elasticbeanstalk.com/"

NUMBER_OF_TESTS=100
SECONDS_COOLDOWN=5   # example value; adjust as needed
DEBUG=0

# Check the first argument to determine if we are testing the nlb or alb
if [ -n "$1" ] && [ "$1" = "nlb" ]; then
    CURRENT_LABEL="nlb"
    CURRENT_ENDPOINT="$NLB_ENDPOINT"
else
    CURRENT_LABEL="alb"
    CURRENT_ENDPOINT="$ALB_ENDPOINT"
fi

OUT_FILE="results-$CURRENT_LABEL.csv"

# Write the csv header row
echo "Attempt;Connect;Waiting;Processing;Total;" >> "$OUT_FILE"

for ((i = 1; i <= NUMBER_OF_TESTS; i++)); do
    echo "> Performing $i/$NUMBER_OF_TESTS benchmark for $CURRENT_LABEL..."

    # -c is concurrency, -t is timelimit, 2>&1 redirects stderr to same location as stdout
    ab -c 50 -t 5 "$CURRENT_ENDPOINT" > tmp.txt 2>&1

    # awk does pattern matching for patterns enclosed within forward slashes
    # the condition is executed against each line of the input file
    # {print $5} is the awk action command, and refers to the fifth field (column)
    # of the matching line - the median in ab's "Connection Times" table,
    # whose columns are: label, min, mean, [+/-sd], median, max
    connect_median=$(awk '/Connect:/ {print $5}' tmp.txt)
    waiting_median=$(awk '/Waiting:/ {print $5}' tmp.txt)
    processing_median=$(awk '/Processing:/ {print $5}' tmp.txt)
    total_median=$(awk '/Total:/ {print $5}' tmp.txt)

    if [ "$DEBUG" -eq 1 ]; then
        echo "$connect_median"
        echo "$waiting_median"
        echo "$processing_median"
        echo "$total_median"
    fi

    rm tmp.txt

    # results for each test are written to the output file
    echo "$i;$connect_median;$waiting_median;$processing_median;$total_median;" >> "$OUT_FILE"

    echo "> Cooling down for $SECONDS_COOLDOWN seconds..."
    sleep "$SECONDS_COOLDOWN"
done
I’ve included comments to explain the key concepts within the code above.

Before executing a shell script file, you have to give it executable permissions with the chmod +x my-script.sh command. Once this is done, the shell script is called twice, once for benchmarking each load balancer type:

./my-script.sh alb
./my-script.sh nlb

Results and observations

The results obtained (in csv format) can be downloaded here: alb, nlb, combined.

I’ve plotted the comparison of connect, waiting, processing and total median times of ALB vs NLB in separate graphs with the x-axis always representing attempts.

Connect time
Waiting time
Processing time
Total time

From the graphs one can see that, while both the connect and waiting graphs show some entries that do not follow the general pattern, on average the red line (NLB) is consistently lower than the blue line (ALB), indicating faster performance.

The processing graph shows an interestingly chaotic initial 20 tests where ALB beats NLB before the NLB line evens out and appears to show steadily better performance overall.

Across the 100 tests, the connect, waiting and processing times sum into the overall result shown in the total graph, which shows the NLB edging out the ALB with steadily faster responses.
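Beyond eyeballing the graphs, the same CSVs can be summarized numerically with another awk one-liner. A minimal sketch, using the semicolon-separated format the benchmark script writes (the two data rows below are made-up sample values, not results from the experiment):

```shell
# Sample rows in the semicolon-separated format produced by the benchmark script
printf 'Attempt;Connect;Waiting;Processing;Total;\n1;2;40;41;43;\n2;2;42;44;46;\n' > sample-results.csv

# NR > 1 skips the header row; $5 is the Total column
awk -F';' 'NR > 1 { sum += $5; n++ } END { printf "mean total: %.1f ms\n", sum / n }' sample-results.csv
# prints: mean total: 44.5 ms
```

Running this against each load balancer’s results file gives a single mean-of-medians figure per load balancer to compare.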

Conclusion and caution

With the benchmarks done and the results showing that the NLB on average outperforms the ALB in median response times, I am still of the opinion that further research is needed before choosing the NLB over the ALB to handle your web application’s traffic.

The Network Load Balancer shines when handling TCP/UDP traffic (e.g. video streaming, gaming, messaging), but it does not do as much as the ALB for HTTP/S traffic. The NLB does not offer cookie-based sticky sessions, request-based (host- and path-based) routing and many of the other layer 7 features that are built into the ALB. This in turn means that if you require any such functionality and opt for the NLB, you will have to implement it yourself.

To learn more about the network and application load balancers follow the links below: