
JMeter – Scaling out load generators using Docker Compose in distributed load testing

In this post, I would like to show how to create multiple instances of JMeter servers/slaves on demand using Docker Compose. I assume you have some familiarity with using Docker for JMeter distributed load testing; if not, please read this post first.

Docker Compose:

A typical application might consist of a web server, a few app servers and a database server, each built as a separate Docker image. To run the application, we need to start all of these containers and place them on a common network (or link them) so that they can communicate with each other.


Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, we describe the whole application in a single YAML file and spin it up with a single command.

Installing Docker Compose:

Check this link for detailed steps to install Docker Compose.
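As a quick sketch only (the version placeholder below is not a real release number; pick the latest release from the Docker Compose GitHub releases page), the standalone binary can be installed roughly like this:

# download the docker-compose binary (replace <version> with the release you want)
sudo curl -L "https://github.com/docker/compose/releases/download/<version>/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
# make it executable and verify the installation
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version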

Compose File:

This is a YAML file in which we describe how our Docker containers should run and how they link to each other; the entire application and its network details are defined here. The default path for a Compose file is

./docker-compose.yml
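If we keep the file elsewhere or under a different name, we can point docker-compose at it with the -f flag (the file path below is just an example):

# run a compose file that is not named docker-compose.yml
sudo docker-compose -f /path/to/jmeter-compose.yml up -d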

To run a JMeter distributed load test, we need 1 master and N slaves. Using the Compose file reference, we create a compose file as shown below.

version: '2'

services:

  master:
    image: vinsdocker/jmmaster
    container_name: master
    tty: true
    hostname: master
    networks:
      - vins
  slave:
    image: vinsdocker/jmserver
    tty: true
    networks:
      - vins
  
networks:
  vins:
    driver: bridge

This docker-compose file defines the architecture we will use to run the JMeter test. Now let's see it in action!
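Before bringing the stack up, it is worth validating the file; docker-compose can parse it and print the resolved configuration (run this from the directory that contains the file):

# validate and print the resolved compose configuration
sudo docker-compose config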

Running application with Compose:

mkdir tag
cd tag
sudo vim docker-compose.yml
sudo docker-compose up -d
Creating network "tag_vins" with driver "bridge"
Creating master
Creating tag_slave_1
sudo docker-compose scale slave=15
sudo docker-compose ps
    Name                  Command               State          Ports
---------------------------------------------------------------------------
master         /bin/bash                        Up      60000/tcp
tag_slave_1    /bin/sh -c $JMETER_HOME/bi ...   Up      1099/tcp, 50000/tcp
tag_slave_10   /bin/sh -c $JMETER_HOME/bi ...   Up      1099/tcp, 50000/tcp
tag_slave_11   /bin/sh -c $JMETER_HOME/bi ...   Up      1099/tcp, 50000/tcp
tag_slave_12   /bin/sh -c $JMETER_HOME/bi ...   Up      1099/tcp, 50000/tcp
tag_slave_13   /bin/sh -c $JMETER_HOME/bi ...   Up      1099/tcp, 50000/tcp
tag_slave_14   /bin/sh -c $JMETER_HOME/bi ...   Up      1099/tcp, 50000/tcp
tag_slave_15   /bin/sh -c $JMETER_HOME/bi ...   Up      1099/tcp, 50000/tcp
tag_slave_2    /bin/sh -c $JMETER_HOME/bi ...   Up      1099/tcp, 50000/tcp
tag_slave_3    /bin/sh -c $JMETER_HOME/bi ...   Up      1099/tcp, 50000/tcp
tag_slave_4    /bin/sh -c $JMETER_HOME/bi ...   Up      1099/tcp, 50000/tcp
tag_slave_5    /bin/sh -c $JMETER_HOME/bi ...   Up      1099/tcp, 50000/tcp
tag_slave_6    /bin/sh -c $JMETER_HOME/bi ...   Up      1099/tcp, 50000/tcp
tag_slave_7    /bin/sh -c $JMETER_HOME/bi ...   Up      1099/tcp, 50000/tcp
tag_slave_8    /bin/sh -c $JMETER_HOME/bi ...   Up      1099/tcp, 50000/tcp
tag_slave_9    /bin/sh -c $JMETER_HOME/bi ...   Up      1099/tcp, 50000/tcp
sudo docker inspect -f '{{.Name}} - {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(sudo docker ps -aq)
/tag_slave_12 - 172.19.0.15
/tag_slave_14 - 172.19.0.16
/tag_slave_13 - 172.19.0.12
/tag_slave_15 - 172.19.0.17
/tag_slave_11 - 172.19.0.11
/tag_slave_10 - 172.19.0.10
/tag_slave_9 - 172.19.0.13
/tag_slave_8 - 172.19.0.14
/tag_slave_7 - 172.19.0.9
/tag_slave_6 - 172.19.0.7
/tag_slave_4 - 172.19.0.8
/tag_slave_3 - 172.19.0.6
/tag_slave_2 - 172.19.0.5
/tag_slave_5 - 172.19.0.4
/tag_slave_1 - 172.19.0.3
/master - 172.19.0.2

Note:

Even though all these containers run in the same custom network and can reach each other by name (for example, tag_slave_1), Docker by default builds container names from the project name, the service name and an index, joined by underscores (_), when we issue the scale command. Java RMI does not accept an underscore in a hostname, which causes issues when running the test in distributed mode, so we use the IP addresses instead.
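Rather than copying the IP addresses by hand, we can build the comma-separated list for the -R option with a small shell sketch like the one below. It assumes the slave containers have "slave" in their names, as in this example, and it runs on the Docker host:

# collect the IPs of all running slave containers into a comma-separated list
SLAVE_IPS=$(sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(sudo docker ps -q --filter "name=slave") | paste -sd, -)
echo $SLAVE_IPS

Since the JMeter command itself runs inside the master container, this value still has to be passed along when we invoke the test, for example as part of the docker exec session shown below.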

Running JMeter test:

sudo docker exec -it master /bin/bash
cd /jmeter/apache-jmeter-2.13/bin
wget https://s3-us-west-2.amazonaws.com/dpd-q/jmeter/jmeter-docker-compose.jmx
./jmeter -n -t jmeter-docker-compose.jmx -R172.19.0.16,172.19.0.15..........
Creating summariser
Created the tree successfully using jmeter-docker-compose.jmx
Configuring remote engine: 172.19.0.16
Configuring remote engine: 172.19.0.15
Configuring remote engine: 172.19.0.17
Configuring remote engine: 172.19.0.13
Configuring remote engine: 172.19.0.14
Configuring remote engine: 172.19.0.11
Configuring remote engine: 172.19.0.12
Configuring remote engine: 172.19.0.9
Configuring remote engine: 172.19.0.10
Configuring remote engine: 172.19.0.8
Configuring remote engine: 172.19.0.7
Configuring remote engine: 172.19.0.6
Configuring remote engine: 172.19.0.5
Configuring remote engine: 172.19.0.4
Configuring remote engine: 172.19.0.3
Starting remote engines
Starting the test @ Sat Sep 24 16:17:22 UTC 2016 (1474733842116)
Remote engines have been started
Waiting for possible shutdown message on port 4445
summary + 6016 in 8s = 795.6/s Avg: 0 Min: 0 Max: 2 Err: 0 (0.00%) Active: 45 Started: 33 Finished: 0
summary + 132200 in 30s = 4405.9/s Avg: 0 Min: 0 Max: 6 Err: 0 (0.00%) Active: 150 Started: 138 Finished: 0
summary = 138216 in 38s = 3679.2/s Avg: 0 Min: 0 Max: 6 Err: 0 (0.00%)
summary + 179100 in 30s = 5965.0/s Avg: 0 Min: 0 Max: 3 Err: 0 (0.00%) Active: 150 Started: 138 Finished: 0
summary = 317316 in 68s = 4694.6/s Avg: 0 Min: 0 Max: 6 Err: 0 (0.00%)
summary + 179100 in 30s = 5975.2/s Avg: 0 Min: 0 Max: 2 Err: 0 (0.00%) Active: 150 Started: 138 Finished: 0
summary = 496416 in 98s = 5088.0/s Avg: 0 Min: 0 Max: 6 Err: 0 (0.00%)
summary + 138980 in 24s = 5852.8/s Avg: 0 Min: 0 Max: 2 Err: 0 (0.00%) Active: 0 Started: 138 Finished: 150
summary = 635396 in 121s = 5237.7/s Avg: 0 Min: 0 Max: 6 Err: 0 (0.00%)
Tidying up remote @ Sat Sep 24 16:19:23 UTC 2016 (1474733963754)
... end of run
sudo docker-compose down
Stopping tag_slave_12 ... done
Stopping tag_slave_14 ... done
Stopping tag_slave_13 ... done
Stopping tag_slave_15 ... done
Stopping tag_slave_11 ... done
Stopping tag_slave_10 ... done
Stopping tag_slave_9 ... done
Stopping tag_slave_8 ... done
Stopping tag_slave_7 ... done
Stopping tag_slave_6 ... done
Stopping tag_slave_4 ... done
Stopping tag_slave_3 ... done
Stopping tag_slave_2 ... done
Stopping tag_slave_5 ... done
Stopping tag_slave_1 ... done
Stopping master ... done
Removing tag_slave_12 ... done
Removing tag_slave_14 ... done
Removing tag_slave_13 ... done
Removing tag_slave_15 ... done
Removing tag_slave_11 ... done
Removing tag_slave_10 ... done
Removing tag_slave_9 ... done
Removing tag_slave_8 ... done
Removing tag_slave_7 ... done
Removing tag_slave_6 ... done
Removing tag_slave_4 ... done
Removing tag_slave_3 ... done
Removing tag_slave_2 ... done
Removing tag_slave_5 ... done
Removing tag_slave_1 ... done
Removing master ... done
Removing network tag_vins

Summary:

We learnt a few basic and important docker-compose commands. Docker, together with Compose, saves us a lot of time in setting up the load testing infrastructure. With the scale command we can create as many JMeter slave instances as we need, and with a single command we can bring the entire application up or stop and remove it.
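One small caveat: in newer Compose releases the standalone scale command has been deprecated in favour of the --scale option on up, so the equivalent invocation would look something like this:

# newer docker-compose versions: scale the slave service while bringing the stack up
sudo docker-compose up -d --scale slave=15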

Note: In this article, I create all the containers on a single host. This setup is helpful for testing your scripts on your local machine before the actual performance test; for real performance testing we would create one container per host. Please check the article here – JMeter – Distributed Load Testing using Docker + RancherOS in Cloud

Happy Testing 🙂
