Tuesday 26 May 2015

Internal Architecture of LoadRunner tool


VuGen stores and retrieves a vugen.ini file in the Windows folder. Vu scripts can be coded to use variable values obtained from parameter files external to the script. In the QTWeb.lrp file (found in LoadRunner's dat\protocols folder), under the [Vugen] section, add the entry MaxThreadPerProcess=5 to limit the number of threads managed by each load generator mdrv.exe process.
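As an illustration of how a script consumes values from an external parameter file, here is a minimal VuGen-style C sketch; the parameter name "username" is hypothetical, and in a real script it would be defined in the script's parameter list and backed by a .dat file:

Action()
{
    /* "{username}" is a hypothetical parameter, assumed to be defined in
     * the script's parameter list and backed by an external .dat file;
     * VuGen substitutes the next value from the file at run time. */
    lr_log_message("Running iteration as user: %s",
                   lr_eval_string("{username}"));
    return 0;
}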
Application servers under test are placed under stress by driver processes, mdrv.exe (the multi-threaded driver process) and r3vuser.exe, which emulate application clients such as the Internet Explorer web browser. The driver performs three main actions: cpp (C language pre-processing), cci (C pre-compiling, which creates a .ci file), and execution using the driver for the protocol technology being tested.
Runs can be made to execute "silently" by invoking mdrv.exe from a Windows batch script. Vusers are invoked as groups (a logical collection of virtual users running the same script on a specific load generator machine) by agents (3,900K magentproc.exe) running as a service or as a process on load generator client machines.
Each machine hosting agents maintains an Execution Log in a .qtp file. When logging is enabled, the agent also creates within the results folder a sequential log file for each Vuser (segregated by Vuser group). During execution, this file is displayed in the View > Show Output window on the LoadRunner Controller machine.
Agents are launched by the Remote Agent Dispatcher process (formerly called the Remote Command Launcher (RCL)) on each load generator machine. Each agent refers to scenario (.lrs) definition files to determine which Vuser groups and scripts to run on host machines.
The Controller is invoked using parameter values within files in the Windows OS folder (WINNT for Windows 2000, WINDOWS for Windows XP). The Windows folder is used because LoadRunner is designed to have only one instance of the Controller running at a time on a machine. The Controller (wlrun.exe) sends a copy of the scenario files along with the request. After a pre-set delay, the Scheduler running on the Controller machine instructs agents (via Windows port 54345, or a dynamic port on UNIX) to initiate test session scenarios.
During a run, execution results are stored to a results folder. It is best practice to set Results Settings to "Automatically create a results directory for each scenario execution," which means that LR will increment the Results Name each time a scenario run starts. For example, a value of "Res11" will be automatically incremented to "Res12" (or sometimes "Res11-1"). Errors are written to the output.mdb MS Access database.
Within each results folder, a "Log" folder is automatically created to contain a log file for each group. After a run, to view a log file from within the Controller, click a group, then right-click it and select "Show Vuser Log". As a scenario runs, monitors maintain counters locally on each host.
After a run, the "collate" process takes .eve and .lrr result files and creates in the results folder a temporary .mdb (MS-Access) database.
The Analysis Module (8,320K analysisu.exe) generates analysis graphs and reports using data from the .mdb database. The LoadRunner Results file (results_name.lrr) from each scenario run, also called an Analysis document file, is read by the Analysis program to display Percentile graphs.
By default, the LRReport folder is created in the My Documents folder of the test analyst's local machine to store Analysis Session files. Reports can optionally be formatted in HTML. Their format is controlled by a .tem template file.
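To make the Percentile graph concrete, here is a minimal stand-alone C sketch of a nearest-rank percentile calculation over a handful of hypothetical transaction response times; it illustrates the statistic itself, not the Analysis module's internal algorithm:

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* qsort comparator for doubles, ascending. */
static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* Nearest-rank method: sort ascending, take the ceil(p/100 * n)-th value. */
static double percentile(double *samples, int n, double pct)
{
    qsort(samples, n, sizeof(double), cmp_double);
    int rank = (int)ceil((pct / 100.0) * n);
    if (rank < 1) rank = 1;
    if (rank > n) rank = n;
    return samples[rank - 1];
}

int main(void)
{
    /* Hypothetical transaction response times in seconds. */
    double rt[] = { 1.2, 0.8, 2.5, 1.9, 0.7, 3.1, 1.4, 2.2, 1.1, 1.6 };
    int n = (int)(sizeof rt / sizeof rt[0]);
    printf("90th percentile response time: %.2f s\n", percentile(rt, n, 90.0));
    return 0;
}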

Diagram: Overview of the LoadRunner tool's internal architecture


Monday 25 May 2015

Functional Testing vs Non-Functional Testing

System Testing:


Before the new system is actually put into operation, a test run is performed to remove any remaining bugs. It is an important phase of a successful system. After all the programs of the system have been coded, a test plan should be developed and run on a given set of test data. The output of the test run should match the expected results. Sometimes, system testing is considered part of the implementation process.

Using the test data, the following test runs are carried out:
Program test
System test
Program test: When the programs have been coded and compiled and brought to working condition, they must be individually tested with the prepared test data. All verification and validation checks must be carried out, and any undesirable behaviour must be noted and debugged (errors corrected).
System test: After the program test has been carried out for each program of the system and the errors removed, the system test is done. At this stage the test is performed on actual data. The complete system is executed on the actual data, and at each stage of the execution the results or output of the system are analyzed. During the result analysis, it may be found that the outputs do not match the expected output of the system. In such cases, the errors in the particular programs are identified, fixed, and tested again for the expected output. All independent modules are brought together, all the interfaces between modules are tested, and the whole set of software is tested to establish that all modules work together correctly as an application, system, or package.
When it is ensured that the system is running error-free, the users are invited to use the system with their own actual data so that it can be shown running as per their requirements.

• Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question "can the user do this?" or "does this particular feature work?"
• Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance characteristics, behavior under certain constraints, or security. Testing will determine the breaking point, the point at which extremes of scalability or performance lead to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users.
• Performance testing falls under the non-functional category and is generally executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate, or verify other quality attributes of the system, such as scalability, reliability, and resource usage.
• Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of users. This is generally referred to as software scalability. When performed as a non-functional activity, load testing is often referred to as endurance testing. Volume testing is a way to test software functions when certain components (for example, a file or database) increase radically in size. Stress testing is a way to test reliability under unexpected or rare workloads. Stability testing (often referred to as load or endurance testing) checks whether the software can continue to function well over an acceptable period.
• There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing, scalability testing, and volume testing are often used interchangeably.

In which areas can we do Resilience Testing?

We can carry out resilience testing in the following areas to make sure that the system can recover from expected or unexpected events without loss of data or functionality.

  • Interruption via network servers: simulate or initiate communication loss with the network (physically disconnect communication wires, or power down network servers or routers).
  • Database corruption: testing for these conditions requires that a known database state be achieved; several database fields, pointers, and keys should be corrupted manually and directly within the database (via database tools).
  • Recovery testing is highly intrusive. Procedures to disconnect cabling (simulating power or communication loss) may not be desirable or feasible.
  • Use of redundant hardware devices (e.g., servers, processors, disks), which are arranged such that one component immediately takes over from another should it fail. Disks, for example, can be included in the architecture as a RAID element (Redundant Array of Inexpensive Disks).

What is the approach for Resilience Testing?

The following steps describe the Resilience Testing approach at a high level.
  1. Publish the test schedule for the week.
  2. Set up a call / give a heads-up to the infrastructure / IT / deployment team a day in advance to walk them through the test scenario.
  3. Identify the focus area of the resilience evaluation.
  4. Identify the critical functionalities and/or scripts that will target the focus area.
  5. Prepare the test steps to attack the focus area.
  6. Ensure that the performance testing team is familiar with the objectives of the test and the test steps.
  7. Do dry runs before the main test to check whether the environment is stable.
  8. Take part in log analysis for the test investigation.
  9. Ensure valid data is captured during the test, and audit the test report.
  10. Validate the test report after it is prepared, confirming that it meets the test objectives.
  11. Analyze the results and publish the compiled report to the supporting teams.

What is Resilience Testing?

Introduction to Resilience Testing

Resilience testing confirms that the system recovers from expected or unexpected events without loss of data or functionality.

Events can include shortage of disk space, unexpected loss of communication, or power out conditions.

A dedicated environment should be available to carry out the various types of resilience tests at any time during business hours.

Resilience Testing ensures that there are no single points of failure and that the system remains available when running under error conditions. The application should be able to continue operating despite component failures (for example, network failure or database failure), while issuing appropriate error messages wherever required.

Failover and recovery testing ensures that the target-of-test can successfully fail over and recover from a variety of hardware, software, or network malfunctions without undue loss of data or data integrity.

For those systems that must be kept running, failover testing ensures that, when a failover condition occurs, the alternate or backup systems properly “take over” for the failed system without any loss of data or transactions.

Evaluation criteria for selecting Load /Performance Testing tool

As a performance test engineer, you need to consider the key points below while selecting the appropriate performance testing tool, so as to provide accurate results to the customer.
Evaluation Criteria:
Before starting any performance testing, it is very common to evaluate the various commercial and free tools available in the market and choose the one best suited to the application under test. The following are the criteria to consider while evaluating performance testing tools, together with a methodology for evaluating them.
• Ease of script development/enhancement
  ◦ The load testing tool should offer the flexibility to develop scripts easily and to enhance them later.
• Protocol support (see the sketch after this list)
  ◦ Load testing tools either emulate load or simulate users. Simulation involves duplicating actual user activity on the GUI/front-end with the intention of replaying it. Emulation involves protocol replay; for this, a rich API of protocol-related functions is required.
  ◦ A good tool will not be restricted to a single protocol and should support multiple protocols, including HTTP/S, FTP, SMTP, Oracle NCA, DB2 CLI, Citrix ICA, SAP, WAP, VoiceXML, PeopleSoft, and Siebel.
• Record & playback support
  ◦ This category details how easy it is to record and play back a test.
  ◦ Is there object recognition when recording and playing back, or does a test appear to record correctly but then fail on playback (without any environment changes, unique-ID changes, etc.)?
  ◦ How easy is it to read the recorded script?
• Cost of licensing
  ◦ The tool should not be expensive and should have flexible licensing options.
• Strong scripting language
  ◦ The scripting language supported by the automated tool should be understandable and precise. It should not generate lengthy code that is difficult to maintain. It should resemble common languages such as C, C++, or Java, and it should be easy to debug.
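To illustrate what protocol-level emulation looks like in a C-based scripting language, here is a minimal VuGen-style sketch of an HTTP form POST; the endpoint, field names, and values are all hypothetical:

Action()
{
    /* Protocol-level emulation: replay an HTTP form POST directly,
     * with no GUI involved. The URL and form fields are hypothetical. */
    web_submit_data("login",
        "Action=http://example.com/login",
        "Method=POST",
        "Mode=HTML",
        ITEMDATA,
        "Name=username", "Value=testuser1", ENDITEM,
        "Name=password", "Value=secret",    ENDITEM,
        LAST);

    return 0;
}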
Evaluation Procedure:
Select the tools that you want to consider for evaluation; the most common load testing tools are HP LoadRunner, JMeter, WebLOAD, NeoLoad, etc. Then:
  1. For each tool considered, identify the pros and cons with respect to each of the criteria listed above.
  2. Give each tool a score for each criterion.
  3. Prepare a score card of all the tools across all the criteria (a weighted-score sketch follows below).
  4. Recommend the best suited tool to the customer.
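A minimal C sketch of such a weighted score card, with hypothetical tools, weights, and ratings (real values would come from the evaluation itself):

#include <stdio.h>

#define NUM_TOOLS    4
#define NUM_CRITERIA 5

int main(void)
{
    /* Hypothetical tool names; substitute the tools under evaluation. */
    const char *tools[NUM_TOOLS] = { "Tool A", "Tool B", "Tool C", "Tool D" };
    /* Relative importance of the five criteria above; weights sum to 1.0. */
    const double weight[NUM_CRITERIA] = { 0.25, 0.25, 0.20, 0.15, 0.15 };
    /* Illustrative 1-10 ratings per tool per criterion (made-up numbers). */
    const int score[NUM_TOOLS][NUM_CRITERIA] = {
        { 9, 9, 8,  4, 9 },
        { 7, 6, 6, 10, 6 },
        { 7, 6, 7,  6, 7 },
        { 8, 7, 8,  6, 7 }
    };

    for (int t = 0; t < NUM_TOOLS; t++) {
        double total = 0.0;
        for (int c = 0; c < NUM_CRITERIA; c++)
            total += weight[c] * score[t][c];
        printf("%s weighted score: %.2f\n", tools[t], total);
    }
    return 0;
}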

Important phases of the performance testing

Phase-I: Analyzing Performance Requirements                                     
The following activities are performed as part of this phase from the performance testing perspective.
Activities:
  1.  Kick Off meetings with QA, Infra team, Network team, DBAs, Development, Architecture and Customer.
  2. Perform initial analysis to identify and understand project status, schedule, and existing testing practices.
  3. Understand product engineering and support organization structure and processes
  4. Complete Performance questionnaire
  5. Identify goals of performance testing
  6. Research User profiles
  7. Study system under test and understand Application Architecture & Network topology
  8. Prepare Understanding Document
  9. Identify Monitoring requirements
  10. Create performance test requirements document
  11. Identify Type of test needed
  12. Identify test scenarios
  13. If required, perform tool comparison analysis, tool recommendation & procurement
  14. Prepare Test Strategy
  15. Review & Signoff Test strategy
  16. Standard required information for performance testing includes (a worked example follows this list):
    1.  Anticipated # of total users
    2.  Anticipated # of concurrent users
    3. Anticipated volume of transactions
    4. Application, system, & network architecture
    5.  User & Transaction response time requirements
    6. Test Type Requirements (Load, Stress, Volume, Burn-in, Fail Over, etc.)
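As a worked example of how the anticipated volumes above translate into a concurrency figure, here is a minimal C sketch using Little's Law (concurrent users = arrival rate x average time in system); the workload numbers are hypothetical:

#include <stdio.h>

int main(void)
{
    /* Hypothetical figures gathered during requirements analysis. */
    double sessions_per_hour = 3600.0;  /* anticipated user sessions per hour */
    double avg_session_sec   = 300.0;   /* average session duration (seconds) */

    /* Little's Law: L = lambda * W, where lambda is the arrival rate in
     * sessions/second and W is the time each user stays in the system. */
    double arrival_rate     = sessions_per_hour / 3600.0;
    double concurrent_users = arrival_rate * avg_session_sec;

    printf("Estimated concurrent users: %.0f\n", concurrent_users);
    return 0;
}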
 Phase-II: Performance Planning
The following activities are performed as part of the Planning phase from the performance testing perspective.
Activities:
  1. Create Performance Test Plan & Review and approval of Test Plan
  2. Prepare estimation document of team size, and effort.
  3. Establish logistics/team formation
  4.  Define roles and responsibilities
  5.  Identify key business process and Business Process Selection
  6. Prepare Workflow document
  7. Identify test data requirements
  8. Protocol Identification
  9. Conduct a POC if required and publish the POC results. If a POC is undertaken:
    1. A prototype application should be available at this time to evaluate and create a proof of concept that validates that performance testing CAN be accomplished on the system with the tools on hand. At this stage a “go/no-go” decision should be made about the types of performance tests to use. This stage of testing does not require production-grade hardware or the complete system, as it is only for the POC.
  10. Ensure the load generators have capacity for the required number of virtual users
 Phase-III: Performance Script Design
The following activities are performed as part of this phase from the performance testing perspective.
Activities:
  1.  System access
    1. All necessary security access points must be established prior to script recording, as this is a manual process to begin with. This includes NT/UNIX/mainframe system access, as well as application user accounts numbered as high as the anticipated total # of system users (100 users == 100 test user IDs).
  2. Develop test scripts according to the workflow document using the identified test tool (see the sketch after this list)
  3. Ensure scripting standards are followed
  4. Conduct Script review
  5. Enhance scripts and plan script migration if required
  6. Ensure volume of test data
  7. Test Environment set up 
    1. The Test Environment must be completed prior to executing any performance tests. This includes having the necessary hardware and software implemented in a simulation of what the production system will entail. Production grade hardware should be used to accurately size the application.
  8. Conduct dry run of all test scripts
  9. Publish dry run test results
  10. Prepare script delivery document
  11. Review execution schedule and scenarios
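As a reference point for script development, below is a minimal VuGen-style C sketch with a single timed transaction; the transaction name and URL are hypothetical, and a real script would be generated by recording and then enhanced per the scripting standards:

Action()
{
    /* Mark the start of a measured business transaction. */
    lr_start_transaction("01_Open_Home_Page");

    /* Hypothetical request; recording generates calls like this. */
    web_url("home",
        "URL=http://example.com/",
        "Resource=0",
        "Mode=HTML",
        LAST);

    /* LR_AUTO derives pass/fail status from the request outcome. */
    lr_end_transaction("01_Open_Home_Page", LR_AUTO);

    return 0;
}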
Phase-IV: Performance Test Execution
The following activities are performed as part of this phase from the performance testing perspective.
Activities:
  1.  Generate and Integrate test data into scripts 
  2.  Build scenario according to work load model
  3. Set up run-time settings
  4. Setup performance counters (Monitors)
  5. Execute shakedown test
  6. Ensure test readiness from all teams involved with the project
  7. Run Test Scenarios 
  8. Monitor high level system performance counters
  9. Capture metrics/ Log test session information
The following counters need to be monitored while running individual performance tests for various transactions (a monitoring sketch follows this list):
  • Processor Performance Counters
  • Process Performance Counters
  • Physical Disk Performance Counters
  • Memory Performance Counters
  • Network Performance Counters
  • System Performance Counters
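On Windows hosts, counters like these can also be sampled programmatically through the Performance Data Helper (PDH) API. A minimal C sketch sampling total CPU utilization follows (error handling omitted; the counter path shown is the standard English path and may need localizing; link against pdh.lib):

#include <windows.h>
#include <pdh.h>
#include <stdio.h>

int main(void)
{
    PDH_HQUERY query;
    PDH_HCOUNTER counter;
    PDH_FMT_COUNTERVALUE value;

    PdhOpenQuery(NULL, 0, &query);
    PdhAddCounter(query, "\\Processor(_Total)\\% Processor Time", 0, &counter);

    PdhCollectQueryData(query);   /* first sample establishes a baseline   */
    Sleep(1000);                  /* rate counters need two spaced samples */
    PdhCollectQueryData(query);   /* second sample                         */

    PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE, NULL, &value);
    printf("CPU utilization: %.1f%%\n", value.doubleValue);

    PdhCloseQuery(query);
    return 0;
}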
Phase-V: Performance Results Analysis
The test results are reviewed and analyzed for any bottlenecks in the application by a team of experts covering the various hardware platforms, operating systems, databases, and software design.
Activities:
  1.  Create performance test results report
  2.  Publish report and Maintain error log 
  3. Review Performance test results with Project team
  4. Verify results with requirements
  5. Analyze reports for bottlenecks and recommend performance improvement solutions
  6. Support performance tuning activity
  7. Re-run test scenarios until the requirements are satisfied
  8. Project closure

Friday 22 May 2015

Goals and Objectives of Performance Testing

Objectives:
  • Application response time: how long does it take to complete a task?
  • Reliability: how stable is the system under a heavy workload?
  • Configuration sizing: which configuration provides the best performance level?
  • Capacity planning: what H/W does the application support?
  • Acceptance: is the system stable enough to go into production?
  • Bottleneck identification: what is the cause of degradation in performance?
  • Regression: does the new version of the software adversely affect response time?
Goals:
  1. Determine business transaction response times.
  2. Measure server resources such as CPU usage, JVM memory heap, disk space, etc.
  3. Measure network bandwidth (throughput) and latency (delay); e.g., limited network throughput introduces latency when transmitting larger amounts of data to a specific location (a worked estimate follows this list).
  4. Determine the optimal system (hardware/software) configuration.
  5. Verify current system capacity and scalability for future growth.
  6. Determine how many users the system can support.
  7. Determine whether the application will meet its SLA.
  8. Identify server bottlenecks such as memory leaks, deadlocks, etc.
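A minimal C sketch of that bandwidth/latency relationship, estimating the transfer time for a payload over a link with hypothetical throughput and delay figures:

#include <stdio.h>

int main(void)
{
    /* Hypothetical link characteristics and payload size. */
    double bandwidth_mbps = 10.0;   /* throughput: 10 megabits per second */
    double latency_ms     = 80.0;   /* one-way network delay              */
    double payload_kb     = 500.0;  /* response payload: 500 kilobytes    */

    /* Transfer time = serialization time (size / throughput) + latency. */
    double serialization_s = (payload_kb * 8.0 / 1024.0) / bandwidth_mbps;
    double total_ms        = serialization_s * 1000.0 + latency_ms;

    printf("Estimated transfer time: %.0f ms\n", total_ms);
    return 0;
}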

What is Performance Testing?

Introduction to Performance Testing
• Performance testing is a means of quality assurance (QA). It involves testing software applications to ensure they will perform well under their expected workload.
• Performance testing is defined as the technical investigation done to determine or validate the speed, scalability, and/or stability characteristics of the product under test.
• The goal of performance testing is not to find bugs but to eliminate performance bottlenecks. A bottleneck is a stage in a process that causes the entire process to slow down or stop. Some common bottlenecks are:
  ◦ CPU utilization
  ◦ Memory utilization
  ◦ Network utilization
  ◦ Disk usage
  ◦ Network delay
  ◦ Client-side processing
  ◦ Database transaction processing
  ◦ Load balancing between servers
  ◦ Data rendering
• Performance testing determines the speed, scalability, and stability characteristics of an application, thereby providing input for making sound business decisions.
  ◦ Speed: determines whether the application responds quickly.
  ◦ Scalability: determines the maximum user load the software application can handle.
  ◦ Stability: determines whether the application is stable under varying loads.