Three ‘A’s to succeed in Rummy… Three ‘S’s to succeed in Performance Testing

Hello Reader,


Are you new to performance testing? No worries. Have a look at my blog on performance testing and you will learn something.

Are you already good at performance testing? Still, have a look at my blog and you can refresh the concepts. “In return I just need your smile; that would make me glad.”


As you know, performance testing is growing rapidly in the industry today, so this blog might be useful to you.

In previous years most companies did not give it much recognition, but now the trend is changing: mindsets are shifting, and companies are recognizing its usage and importance.

Now it’s time to show our performance in Performance Testing.


I hope my blog is useful enough to boost your testing skills.

What is Performance Testing?

Performance Testing is the process of evaluating a system’s behavior under various extreme conditions.

The main intent of performance testing is monitoring and improving key performance indicators such as response time, throughput, hits per second, memory, and CPU utilization.


There are three objectives (the three ‘S’):
Speed
Scalability
Stability

Performance testing evaluates the system: it identifies bottlenecks and critical issues by implementing various workload models.
Here I am going to explain briefly the end-to-end performance testing process.

Organization Environment:
In the organization environment we have the following phases:

Phase 1. DEVELOPMENT (the application is developed in modules)
Phase 2. USER ACCEPTANCE TESTING (all the modules are integrated into a single product)
Phase 3. QUALITY CONTROL (functional testing)
Phase 4. PERFORMANCE TESTING ENVIRONMENT (performance testing)
Phase 5. PRODUCTION (end users use the application)

Scenario Design (Requirement Gathering):

We will have multiple discussions with the client to understand the application architecture, the server configuration, and the application itself.
Then we will identify the business scenarios based on the following parameters:
1. The scenarios that are used by most of the users.
2. The scenarios that generate the most revenue for the client.
3. The scenarios that are critical to the application.
Then we will decide the number of users and the types of performance testing.
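As one way of turning such requirements into a user count, Little's Law relates concurrent users to target throughput and per-iteration time. The sketch below is a hedged illustration with made-up numbers, not figures from any real client requirement.

```python
import math

# Little's Law sizing sketch:
#   concurrent users = throughput (tx/sec) * (response time + think time)
# All figures below are hypothetical examples, not real requirements.
def required_vusers(tx_per_hour: float, resp_time_s: float, think_time_s: float) -> int:
    tx_per_sec = tx_per_hour / 3600.0
    return math.ceil(tx_per_sec * (resp_time_s + think_time_s))

# e.g. 18,000 transactions/hour with a 2 s response time and 8 s think time
print(required_vusers(18000, 2, 8))
```

The same formula also works in reverse: given a Vuser count and think time, it tells you roughly what throughput the scenario will generate.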

Types of Performance Testing:

The types of performance testing are

Load Testing: 

Test execution with a specified user load under given conditions.
The major focus is on system stability, response times, and successes versus failures.

Stress Testing: 

Test execution where we continuously increase the load on the application until it crashes. The major focus is on the threshold limit of users; response time is not considered at all.

Endurance Testing: 

Test execution where we load the application continuously for long durations, such as days or months, to identify bottlenecks like memory leaks.

Spike Testing: 

Test execution where the application is hit with sudden variations in user load (sudden increases and decreases) to check the system behavior.

Volume Testing: 

Test execution where we send continuous data to the database to check whether it can handle that volume of data at a time.
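The difference between a steady load test and a spike test comes down to the shape of the user load over time. As a toy illustration (not tied to any particular tool), the two shapes can be built as per-minute Vuser counts:

```python
# Illustrative only: per-minute Vuser counts for two test shapes.
# All numbers are hypothetical examples.

def load_profile(steady_users: int, minutes: int) -> list[int]:
    """Constant load for the whole test (load testing)."""
    return [steady_users] * minutes

def spike_profile(base: int, spike: int, minutes: int, spike_at: int) -> list[int]:
    """Steady base load with a sudden one-minute spike (spike testing)."""
    profile = [base] * minutes
    profile[spike_at] = spike
    return profile

print(load_profile(50, 5))
print(spike_profile(50, 500, 5, 2))
```

A stress profile would instead ramp the count upward step by step until the application fails, and an endurance profile is simply the load profile run for a much longer duration.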

Test Plan:

Here is a sample test plan with a detailed description of our activities and deliverables for this project.

The test plan consists of the following:
  • Application Overview
  • Application Architecture
  • Scenarios in detail (step by step description)
  • User load
  • Test schedule
  • Server monitoring (metrics)
  • Deliverables (test plan, automation scripts, analysis, reports)
  • Assumptions, Limitations
  • Entry criteria & Exit criteria

Performance Testing Phases:

During performance testing we have three phases:

  • Scripting
  • Scenario Design
  • Results

Scripting:

Recording: Once we get sign-off from the client, we will start with scripting.
Since ours is a web application, we record with the Web (HTTP/HTML) protocol; if the application is Java-oriented, we go with Java over HTTP.

So, depending on the application, we choose the protocol.


Protocol: A set of rules and conventions that governs communication between the client and the server.


After capturing the script, we need to customize it in the following ways.
Customization:

Correlation:


What? Capturing dynamic data that is generated by the server is called correlation.
Why? To simulate real user actions and real sessions; if the dynamic values are not correlated, the scripts may fail.
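The mechanics of correlation can be sketched as extracting a value between a left and a right boundary, which is the same idea behind boundary-based correlation rules in load testing tools (e.g. LB/RB boundaries in LoadRunner's web_reg_save_param). The response body and token below are invented for illustration:

```python
import re

# Assumed server response containing a dynamic session token (made-up example).
response_body = '<input type="hidden" name="sessionId" value="AB12-XY99">'

# Left/right-boundary extraction, similar in spirit to a correlation rule.
def correlate(body: str, left: str, right: str) -> str:
    match = re.search(re.escape(left) + "(.*?)" + re.escape(right), body)
    if match is None:
        # In a real script, replaying a stale recorded value here is
        # exactly why uncorrelated scripts fail.
        raise ValueError("boundary not found; script would fail here")
    return match.group(1)

session_id = correlate(response_body, 'value="', '">')
print(session_id)
```

The captured value would then be substituted into every later request that originally carried the recorded (now stale) session ID.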

Parameterization:


What? Sending different user inputs for different users.
Why? To simulate real user actions, to ensure that results are not fetched from the cache, and to stress the database.

Page Validation:


What? Verifying a piece of static text on a particular web page.
Why? When the test is executed for multiple users, it is not possible to verify whether all the pages were successful by watching the browser display.
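A page validation is essentially a substring check registered against the response body (comparable to a text check such as web_reg_find); the page content below is a made-up example:

```python
# Assumed response body; the expected text is registered before the
# request is sent. Both strings are invented for illustration.
response_body = "<html><body><h1>Welcome, John</h1></body></html>"

def validate_page(body: str, expected_text: str) -> bool:
    """Mark the transaction passed only if the static text is present."""
    return expected_text in body

print(validate_page(response_body, "Welcome"))
print(validate_page(response_body, "Error 500"))
```

This catches the common failure mode where the server returns HTTP 200 but the page actually shows an error message, which a pure status-code check would miss.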

Test Execution (Scenario design):

Once scripting is ready, we will go for test execution.
The following activities are performed before we start the actual test:


1. Configure the scenario
2. Configure the monitoring
3. Define SLAs

Design tab:

Here, we will set the range of user load and run the tests.
The other settings in the design tab are:
1. Load generators (we can add different load generators by giving the IP addresses of the machines where the load generator is installed)
2. Add Vusers (we can add users depending on the load architecture)
3. Add Groups (we can add the scripts)
4. Details (details of a script)
Once the scenario is configured according to the client requirement, we start running it. After the run finishes, we collect the test results.

Analysis (Results):

Once the test is completed, we will review the test results; based on these, we come to know how the application behaved.

The default analysis summary contains details such as the test duration, total throughput, average throughput, total hits and average hits per second, passed and failed transactions, and the min, max, average, and 90th-percentile response times.
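Those summary figures come from the raw per-transaction response times. A small sketch of how they are derived (using a simple nearest-rank 90th percentile; real tools may interpolate differently, and the sample times are invented):

```python
# Summarize raw transaction response times (seconds) into the
# min / max / avg / 90th-percentile figures a results summary reports.
def summarize(times: list[float]) -> dict:
    ordered = sorted(times)
    # nearest-rank 90th percentile: value below which ~90% of samples fall
    idx = max(0, round(0.9 * len(ordered)) - 1)
    return {
        "min": ordered[0],
        "max": ordered[-1],
        "avg": round(sum(ordered) / len(ordered), 2),
        "p90": ordered[idx],
    }

# Made-up sample with one outlier (4.2 s) to show why the 90th
# percentile is more representative than the max.
times = [0.8, 1.1, 0.9, 1.3, 4.2, 1.0, 1.2, 0.7, 1.4, 1.1]
print(summarize(times))
```

Note how the single 4.2 s outlier dominates the max but barely moves the 90th percentile, which is why SLAs are usually expressed in percentiles rather than maximums.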

Analyzing the test results and finding bottlenecks:

This section gives a brief idea on how to start the analysis of the test results. Once the test is completed, the test results should be divided into three sections:
1. Up to the point where behavior was normal and expected.
2. From where the response times started increasing exponentially and errors started appearing.
3. From where the errors increased drastically, until the end of the test.



Bottlenecks:
Here I am mentioning one sample bottleneck that I faced in my project.

In a .NET application, Gen0 should collect most objects during garbage collection.
Roughly 1/10th of Gen0 should survive to be collected at Gen1, and 1/10th of Gen1 at Gen2.
In my application, Gen1 was collecting around 1/3rd of Gen0.
So I inferred that garbage collection was not happening properly.
I suggested doing application profiling for the scenario to find the objects that were not being garbage collected properly. Application profiling tools include:
JProbe – Java
CLR Profiler – .NET
These profilers show us all the objects created during the scenario, along with the garbage collection details of those objects.
With this, we can find out how the garbage collection happened.
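The rule of thumb described above can be expressed as a simple ratio check; the collection counts here are made-up numbers, not real profiler output:

```python
# Illustrative check of the generation survival ratios described above.
# The counts are invented examples, not real profiler output.
def survival_ratio(collected_parent: int, collected_child: int) -> float:
    """Fraction of the parent generation's objects that survived
    long enough to be collected in the child generation."""
    return collected_child / collected_parent

gen0, gen1 = 30000, 10000   # Gen1 collecting ~1/3rd of Gen0 is suspicious
ratio = survival_ratio(gen0, gen1)
print(f"Gen1/Gen0 ratio: {ratio:.2f}")
if ratio > 0.1:             # rule of thumb from the text: ~1/10th is healthy
    print("possible memory bottleneck: profile the application")
```

A persistently high ratio means objects are living longer than expected, which is the pattern that prompted the profiling recommendation above.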

Conclusion:

Have you read my complete blog? Then you are ready to perform Performance Testing. Go rock the world. Thank you.