A JMeter Test Plan must have a listener to display the results of a performance test execution. Listeners capture the responses coming back from the server while JMeter runs and present them as trees, tables, graphs, and log files. A listener also allows you to save the results to a file for future reference. JMeter provides many types of listeners, including: Summary Report, Aggregate Report, Aggregate Graph, View Results Tree, and View Results in Table.
In this article we will discuss the Summary Report listener. It contains a table in which a row is created for each differently named request in your test. The Aggregate Report serves the same purpose, but the benefit of the Summary Report is that it consumes less memory.
For demo purposes we have already recorded a few pages of Testing Journals, and we will run that script with 10 users. Before we start the execution, let's see how to add a Summary Report to a JMeter Test Plan.
How to Add a Summary Report?
To add a Summary Report: right-click on the Thread Group > Add > Listener > Summary Report.
Now, let's understand a few components of the Summary Report.
Name: Here you can give the Summary Report a name.
Read results from file: If you already have an execution report exported from a previous test run, you can browse to the file and the Summary Report will load its results into the table.
Log/Display Only: Before starting the execution you can check "Errors", "Successes", or neither. The Summary Report will filter the results shown in the table accordingly.
Save Table Data: This field is displayed at the bottom of the Summary Report screen. You can export the execution results from the table to a CSV file, with options to include or omit the "Group name in table" and the "Table Header".
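As a rough sketch of what the "Errors"/"Successes" filter does over the per-sample rows a saved results file would contain, here is a minimal Python illustration. The field names (`label`, `elapsed`, `success`) and the sample values are assumptions for the example, not the full JMeter file format:

```python
# Hypothetical sample rows, loosely shaped like JMeter's CSV result fields
# (field names assumed for illustration: label, elapsed, success).
rows = [
    {"label": "Label 1", "elapsed": 942, "success": True},
    {"label": "Label 1", "elapsed": 584, "success": False},
    {"label": "Label 2", "elapsed": 610, "success": True},
]

def filter_rows(rows, mode):
    """mode: 'errors', 'successes', or None (show everything)."""
    if mode == "errors":
        return [r for r in rows if not r["success"]]
    if mode == "successes":
        return [r for r in rows if r["success"]]
    return list(rows)

errors = filter_rows(rows, "errors")        # only failed samples
successes = filter_rows(rows, "successes")  # only passing samples
print(len(errors), len(successes))
```

Checking neither box corresponds to `mode=None`, which leaves the table unfiltered.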
Understanding Summary Report Table:
Now let's look at the columns of the report table and understand the significance of each:
Label: The name/URL of the specific HTTP(S) request. If you have selected the "Include group name in label?" option, the name of the Thread Group is applied as a prefix to each label.
#Samples: The total number of samples executed for the label (number of virtual users × number of iterations).
Average: The average time (in milliseconds) taken by all samples of a specific label. In our case, the average time for Label 1 is 942 milliseconds, and the overall average is 584 milliseconds.
Min: The shortest time taken by any sample for a specific label. Looking at the Min value for Label 1, the shortest response time among its 20 samples was 584 milliseconds.
Max: The longest time taken by any sample for a specific label. Looking at the Max value for Label 1, the longest response time among its 20 samples was 2867 milliseconds.
Std. Dev.: The standard deviation, a measure of how much individual sample response times deviate from the average. The smaller this value, the more consistent the response times. As a rule of thumb, the standard deviation should be less than or equal to half of the average time for a label.
Error %: The percentage of failed requests for each label.
Throughput: The number of requests processed per unit of time (seconds, minutes, or hours) by the server. The time is calculated from the start of the first sample to the end of the last sample. Larger throughput is better.
KB/sec: The amount of data downloaded from the server per second during the performance test execution. In short, it is the throughput measured in kilobytes per second.
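The column values above can all be reproduced from raw sample data. Below is a minimal Python sketch using illustrative numbers (not the actual figures from this test run); the sample timings, byte counts, and test window are assumptions made for the example:

```python
import statistics

# Illustrative raw samples for one label: (elapsed_ms, success, bytes_received).
samples = [
    (584, True, 20_480), (600, True, 20_480), (640, True, 20_480),
    (700, False, 1_024), (942, True, 20_480),
]
# Hypothetical test window: start of first sample to end of last sample (ms).
start_first_ms, end_last_ms = 1_700_000_000_000, 1_700_000_002_500

elapsed = [s[0] for s in samples]
num = len(elapsed)                                  # #Samples
avg = sum(elapsed) / num                            # Average
lo, hi = min(elapsed), max(elapsed)                 # Min / Max
std = statistics.pstdev(elapsed)                    # Std. Dev. (population form)
error_pct = 100 * sum(1 for s in samples if not s[1]) / num   # Error %

# Throughput: samples divided by the window from first start to last end.
window_s = (end_last_ms - start_first_ms) / 1000.0
throughput = num / window_s                         # requests per second
kb_per_sec = sum(s[2] for s in samples) / 1024 / window_s     # KB/sec

print(num, round(avg), lo, hi, round(std), error_pct)
print(round(throughput, 1), round(kb_per_sec, 1))
```

Note how the rule of thumb from the Std. Dev. column can be checked directly: `std <= avg / 2` indicates consistent response times for the label.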
Hope you found this article informative. Want to discuss it in more detail? Let's talk in the comments section below.