Top 20 Performance Testing Interview Questions

If you are planning to start your career as a performance tester, below are some of the most commonly asked performance testing interview questions. These questions have been asked in top MNCs like Oracle, TCS, Accenture, Infosys, Wipro, HCL, Capgemini, etc.

Q1. What are the different types of performance testing?

Ans: There are six major types of Performance Testing:

1. Load Testing: The purpose of load testing is to identify application performance under a certain fixed load.

2. Stress Testing: The goal of stress testing is to identify the breakpoint of an application.

3. Endurance Testing: An endurance test is executed over a long period of time (e.g., 4 hrs, 8 hrs, 1 day, etc.) to identify issues like memory leaks in an application.

4. Spike Testing: The purpose of spike testing is to identify the application's performance under a sudden increase or decrease in load.

5. Failover Testing: Failover testing is a technique to verify the system's ability to provide extra resources and to move to back-up systems when the system fails for one reason or another.

6. Volume Testing: Volume testing is done to analyze system performance as the volume of data in the database increases.

Q2. What is the difference between stress and endurance testing?

This is one of the most frequently asked performance testing interview questions.

Ans: A stress test is executed to identify the breakpoint (the maximum load an application can sustain) of an application, whereas an endurance test is executed over a long period of time (4 hrs, 8 hrs, 1 day, etc.) to identify issues like memory leaks.

Q3. How do you define the duration for an endurance test?

Ans: The duration of an endurance test is defined based on the usage pattern of end-users. 

1. If the end-users will use the application under test (AUT) during standard office hours only, then we can set the endurance test duration as 8 hours.

2. If the application under test (AUT) will be used globally, then we can set the duration of the endurance test to one day or even one week.

Q4. What is capacity testing and capacity planning?

Ans: The capacity test is a test to determine how many users your application can sustain with the given hardware. These kinds of tests are generally executed to identify hardware limitations.

Capacity test results are helpful in capacity planning.

For example, we executed a capacity test on an application server with 8 GB RAM and observed that the server can sustain a maximum of 100 users' load.
Based on these results, if we are planning for 400 users in the future, we would need a total of 4 servers.
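The server-count arithmetic above can be sketched in a few lines of Python (the user and server figures are the illustrative numbers from the example, not real measurements):

```python
import math

# Illustrative figures from the capacity test example above
users_per_server = 100   # max users one 8 GB server sustained
target_users = 400       # expected future user load

# Round up so we never under-provision
servers_needed = math.ceil(target_users / users_per_server)
print(servers_needed)  # 4
```

Rounding up with `math.ceil` matters: a target of 450 users would still require 5 servers, not 4.5.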

Q5. What are the phases of the performance testing lifecycle?

Before you appear for a performance testing interview, you must know the phases of the performance testing lifecycle.

Ans: Below are the phases of Performance Testing Lifecycle:

  • Requirement Gathering: In this phase, we gather the non-functional requirements of the application.
  • Test Planning: In this phase, we prepare the test plan and test strategy document.
  • Test Design: In this phase, we design the test scripts and scenarios.
  • Test Executions: In this phase, we execute planned load tests, stress tests, endurance tests, etc.
  • Analysis: In this phase, we analyze the performance test results.
  • Reporting: In this phase, we prepare the performance test reports.

Q6. What information do you gather in the requirement gathering phase?

Ans: In this phase, we generally share a non-functional requirement questionnaire with the client and ask for the below details:

1. Application Details: Application Name, Type (Mobile/Web/Desktop), Application Accessibility (Intranet/Internet) etc.

2. Environment Details: Information about Web, App, and DB servers.

3. Workload Details: Number of users, workflows, expected transactions per hour (TPH), etc.

4. Performance Targets and SLA: Targeted SLA (Service Level Agreement) for transaction response time, API response time, CPU utilization, and Memory utilization.

Q7. How will you decide the number of users for a load test?

Ans: Generally, we get this information from the client during the requirement gathering phase of the performance testing lifecycle. 

However, in some cases, if the client does not have this information, we use the following approach to decide the number of users for a load test.

  • First, we check if the application is already running in production. If yes, we can get the current production user count and behaviour using APM/analytics tools like AppDynamics, Dynatrace, New Relic, Google Analytics, etc.
  • If the application under test (AUT) is a new application and going live for the first time, we can ask the client if they have any other similar application in the production from which we can extract the data.
  • If the client does not have any other similar application, we can follow an incremental model: first test the application with 100 users, then 200 users, then 300 users, until the application reaches its threshold.

Q8. What are the components of a performance test plan?

Ans: Below are the components of the performance test plan:

1. Performance Test Objectives: For each business process, module, or application, the objectives are listed and defined here. The objectives stem from the anticipated workload, change requests, or performance requirements. At a high level, this section lists peak users, number of transactions, and response times for normal and peak loads.

2. Test Scope: The scope contains a detailed breakdown of business processes and the load mix, if applicable. This section also outlines the components or processes that are out of scope for this performance test, as well as any performance testing types that will not be included.

3. Acceptance Criteria: This section details the high-level requirements mentioned in the Objectives. It defines the normal and peak loads and the expected transaction and response times for each applicable component in the scope of the test. The number of users, number of transactions, or report-processing criteria per minute/hour/day are listed.

4. Test Approach: The largest section of the document, the Test Approach defines the process, timing, testing scenarios, test script creation and validation, and testing location for each performance test type in scope, such as Benchmarking Test, Integrated Test, and Stress/Soak Test. The hardware details and comparison of the production and test environments are listed, and all performance testing tools and associated monitoring processes are defined in this section as well. This section also contains the process for handling defects, error statistics, and test results documentation.

5. Test Schedule: For each testing activity, the start date, end date, and required support are listed in a table.

6. Entry and Exit Criteria:  This section is a list of all activities that must be performed prior to executing the performance test and the criteria to be met prior to considering the performance test complete. It also lists the persons or teams responsible for each activity.

7. Deliverables: This lists any planned deliverables and descriptions along with those responsible for completing and delivering them.

8. Risks, issues, assumptions, dependencies: All performance limitations, associated risks and mitigation efforts, assumptions, and dependencies are described in these sections.

Q9. What is little's law?

Ans: Little's Law is very useful for defining the workload model in performance testing. According to Little's Law:

N = TPS × (RT + TT + PT)

where:

N = Number of users (expected virtual users)

TPS = Transactions per second (if you have the TPH (Transactions Per Hour) value, divide it by 3600 to convert it into TPS)

RT = Total execution time of a script (the end-to-end execution time of the script, without think time and pacing)

TT = Total think time in the script (sum of the think times between all the transactions)

PT = Pacing time (the pause between two iterations, after the end of the previous iteration and before the start of the new one)
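Putting the definitions above together (Little's Law gives the number of virtual users as TPS multiplied by the total iteration time, RT + TT + PT), a quick sanity check can be run in Python. The TPH, RT, TT, and PT values below are purely illustrative assumptions, not real requirements:

```python
# Little's Law workload model: N = TPS * (RT + TT + PT)
# All numbers below are illustrative assumptions.
tph = 7200               # expected Transactions Per Hour from the NFRs
tps = tph / 3600         # convert TPH to TPS
rt = 20                  # script execution time in seconds (no waits)
tt = 30                  # total think time in seconds
pt = 10                  # pacing between iterations in seconds

virtual_users = tps * (rt + tt + pt)
print(virtual_users)  # 120.0
```

So a target of 7200 TPH with a 60-second iteration (script + think time + pacing) needs roughly 120 virtual users.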


Q10. How will you decide the number of load generators required for a test?

Ans: One of the best ways to find a load generator's capacity for a given test is to run a small test (maybe 5 minutes) with 5 or 10 users, capture the load generator's memory and CPU utilization, and use that information to calculate the number of load generators required.

How? For example, you ran a test with 10 users for 5 minutes and observed that the LG's memory utilization is 200 MB (memory utilization during the test minus memory utilization before starting the test, which is the memory reserved by the OS). It means your 10 users are consuming 200 MB of RAM (i.e., 20 MB per user).

Once we have the above information, We can use the below formula to calculate the capacity of one load generator.

Number of users per LG (N) = (75% of (Total memory of LG − Memory reserved by OS)) / Memory utilized by one user

Number of load generators = Total number of users / Users per load generator
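A minimal sketch of both formulas in Python. The machine sizes are assumptions for illustration (an 8 GB load generator with 2 GB held by the OS, and a 1000-user target), with the 20 MB-per-user figure taken from the probe test above:

```python
import math

# Figures assumed for illustration (from a 10-user probe test)
total_lg_memory_mb = 8192     # RAM installed on one load generator
os_reserved_mb = 2048         # memory the OS itself holds
memory_per_user_mb = 20       # 200 MB / 10 users from the probe run

# Capacity of a single load generator (75% safety margin)
usable_mb = 0.75 * (total_lg_memory_mb - os_reserved_mb)
users_per_lg = int(usable_mb // memory_per_user_mb)

# Load generators needed for the full test, rounded up
total_users = 1000
lgs_needed = math.ceil(total_users / users_per_lg)
print(users_per_lg, lgs_needed)  # 230 5
```

The 75% factor leaves headroom for spikes in per-user memory during the real test; CPU utilization should be checked the same way, and the more constrained of the two resources decides the count.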


NOTE: This is Part 1 of the performance testing interview questions. In the upcoming series, we will add some tool-specific questions like JMeter Interview Questions, LoadRunner Interview Questions, NeoLoad Interview Questions, etc.

Q11. What is 90th Percentile, How to calculate it?

Ans: For example, you executed your load test script for 10 iterations and got the below response times for a transaction:

Iteration1: 8 sec
Iteration2: 3 sec
Iteration3: 5 sec
Iteration4: 2 sec
Iteration5: 4 sec
Iteration6: 6 sec
Iteration7: 10 sec
Iteration8: 9 sec
Iteration9: 7 sec
Iteration10: 1 sec

Now, if you sort the response time values in ascending order, you get 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.

Now, the 9th value out of 10 (like the 90th out of 100) is your 90th percentile. In our case, the 90th percentile value is 9.
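The same calculation can be sketched in Python using the response times from this example. This uses the simple nearest-rank method described above; note that load testing tools may use slightly different percentile interpolation:

```python
import math

response_times = [8, 3, 5, 2, 4, 6, 10, 9, 7, 1]  # seconds, from the iterations above

# Sort ascending and take the value at the 90% rank (1-based)
ordered = sorted(response_times)
rank = math.ceil(0.9 * len(ordered))  # the 9th value out of 10
p90 = ordered[rank - 1]
print(p90)  # 9
```

In other words, 90% of the iterations responded in 9 seconds or less.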

Q12. Why do we consider the 90th percentile over average response time?

Ans: There are cases where we observe sudden spikes in response times, which inflate the average response time value and give the client an inaccurate picture of application performance.

Let's understand with an example.

For example, we ran a load test script for 10 iterations and the response time of each iteration is as below:

Iteration1: 4 sec
Iteration2: 3 sec
Iteration3: 3 sec
Iteration4: 4 sec
Iteration5: 4 sec
Iteration6: 5 sec
Iteration7: 4 sec
Iteration8: 41 sec
Iteration9:  3 sec
Iteration10: 4 sec

Let's calculate the average response time:

(4+3+3+4+4+5+4+41+3+4)/10 = 7.5 seconds

The transaction response time for most of the iterations was between 3 and 5 seconds, but due to one sudden spike, the average response time reached 7.5 seconds, which gives us an inaccurate picture of application performance.

However, the 90th Percentile value for the above example is 5.

Please refer to Q.11 to understand how to calculate the 90th percentile. 
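Computing both metrics for this example shows the gap, using the same nearest-rank percentile method as in Q11:

```python
import math

response_times = [4, 3, 3, 4, 4, 5, 4, 41, 3, 4]  # seconds, from the iterations above

# Average is dragged up by the single 41-second outlier
average = sum(response_times) / len(response_times)

# 90th percentile ignores the outlier at the tail
ordered = sorted(response_times)
p90 = ordered[math.ceil(0.9 * len(ordered)) - 1]

print(average, p90)  # 7.5 5
```

The 90th percentile (5 seconds) describes what most users actually experienced, while the average (7.5 seconds) is distorted by one slow iteration.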

Q.13 What is the difference between baseline and benchmarking?

Ans: Baseline testing is the process of running a set of tests to capture performance information, whereas benchmark testing is the process of comparing application performance against an industry standard set by another organization.

Baseline Testing: When an application is tested for the first time, we capture performance metrics like response time, throughput, CPU utilization, etc., and use them as a baseline for future tests to compare against.

Benchmark Testing: Benchmark testing is used to test and verify the application performance against industry standards.

Q.14 What is the difference between concurrent and simultaneous users?

Ans: Concurrent Users: When multiple users access a website but perform different actions at the same time, they are called concurrent users.

Simultaneous Users: When multiple users access a website and perform the same action at the same time, they are called simultaneous users.

The rendezvous point in LoadRunner is used to simulate simultaneous users on the application.

Q.15 What is hit/sec?

Ans: Hits per second in performance testing refers to the number of HTTP requests sent to a web server in one second.

Hits per second is different from transactions per second: a single transaction can contain multiple requests.

Q.16 What is throughput in Performance Testing?

Ans: In simple words, throughput is the rate at which the server processes user requests. It can be interpreted in multiple ways in performance testing, depending on the tool.

In LoadRunner, It's the amount of data sent by the server in one second.

In JMeter, It's the number of transactions executed in one second. 

Q.17 What is the relation between hit/sec and throughput in performance testing?

Ans: Generally, hits/sec and throughput are directly proportional. If the hits/sec increase, the throughput will also increase.

However, if the server reaches its maximum processing capacity and we still increase the hits/sec, it may result in constant or even lower throughput.

Q.18 Explain the different response codes.

Ans: Below are some common HTTP response codes:

200 - OK: The request has succeeded.

302 - Found: This response code means that the URI of the requested resource has been changed temporarily. Further changes in the URI might be made in the future.

400 - Bad Request: It means that the server is unable to understand the request due to its invalid syntax.

401 - Unauthorised: It means that the client must authenticate itself to get the requested response. We mostly get this response code during the scripting phase if we pass an invalid or wrong token/user credentials to any request.

403 - Forbidden: The client does not have access to the requested resource. Unlike 401, the client's identity is known to the server.

404 - Not Found: The server is unable to find the requested resource.

405 - Method Not Allowed: The server sends this response code if we use the wrong method in our request, e.g., if we use the GET method instead of POST while posting data to the server.

500 - Internal Server Error: The server has encountered a situation that it doesn't know how to handle.

502 - Bad Gateway: The server throws this error when it gets an invalid response while acting as a gateway.

503 - Service Unavailable: It means that the server is not ready to handle the request, e.g., the server is down due to overload or maintenance.

504 - Gateway Timeout: The server is acting as a gateway and does not get a response in time from another server.

Q.19 What is the difference between 401 (Unauthorised) and 403 (Forbidden) response codes?

Ans: The Unauthorised or 401 error code means that the server is unable to identify you. It may be due to invalid credentials.
Eg: If you try to log in to the application with incorrect credentials, the server will throw a 401 error.

The Forbidden or 403 error code means that your credentials are valid and the server is able to identify you, but you do not have access to the requested resource.
Eg: You log in to the application with the correct credentials but try to access the admin page without admin rights.

Summary: Thanks for reading the article. This is part-1 of Performance Testing Interview Questions. In upcoming posts, I will share some more questions asked in most of the performance testing interviews, specific to tools like JMeter, Loadrunner, Neoload, Appdynamics, etc.


If you have any doubts or suggestions, please let me know.
