Thursday, 11 August 2016

Some Important SQL Performance Counters for DBAs

The following list covers some important SQL performance counters that a DBA should collect during a performance testing engagement.

  • SQLServer:Access Methods - Full Scans/sec
    A value greater than 1 or 2 indicates table or index page scans; we need to analyse how these scans can be avoided.
  • SQLServer:Access Methods - Page Splits/sec
    An interesting counter that can point to table and index design issues. This value should be as low as possible.
  • SQLServer:Access Methods - Table Lock Escalations/sec
    The number of times per second a table lock was requested. A high number calls for a revisit of the query and the indexes on the table.
  • SQLServer:Buffer Manager - Database pages
    The number of pages that constitute the SQL data cache. Large swings in this value indicate that the database is churning pages in and out of the cache; we need to either increase the system memory or the max server memory setting.
  • SQLServer:Buffer Manager - Procedure cache pages
    The number of pages used by the procedure cache, which is where compiled queries and plans are stored.
  • SQLServer:Databases - Active Transactions
    The number of currently active transactions in the system.
  • SQLServer:Databases - Log growths
    The number of times the log files have been extended. If there is a lot of activity in this counter, we need to allocate a static and large enough size for the log files.
  • SQLServer:Databases - Transactions/sec
    Indicates how active the SQL Server system is; a higher value means more activity is occurring.
  • SQLServer:General Statistics - User Connections
    The number of users currently connected to SQL Server.
  • SQLServer:Locks - Lock Requests/sec
    The number of requests for a type of lock per second.
  • SQLServer:Locks - Average Wait Time (ms)
    The average wait time in milliseconds to acquire a lock; the lower the value, the better.
  • SQLServer:Locks - Number of Deadlocks/sec
    The number of lock requests that resulted in a deadlock.
  • SQLServer:Memory Manager - Optimizer Memory (KB)
    The amount of memory, in KB, that the server is using for query optimization. This value should be steady; large variations suggest a lot of dynamic SQL is being executed.
  • SQLServer:Memory Manager - Connection Memory (KB)
    The amount of memory, in KB, used to maintain connections.
  • SQLServer:SQL Statistics - SQL Compilations/sec
    The number of SQL Server compilations per second. This value should be as low as possible.
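
As a hedged illustration, the sketch below pulls a few of these counters straight from SQL Server's sys.dm_os_performance_counters DMV using Python and pyodbc. The connection string is an assumption, and on a named instance the object names are prefixed with MSSQL$InstanceName: rather than SQLServer:.

import pyodbc

# Hypothetical connection details; adjust driver, server and credentials for your environment.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=master;Trusted_Connection=yes;"
)

sql = """
SELECT RTRIM(object_name)  AS object_name,
       RTRIM(counter_name) AS counter_name,
       cntr_value
FROM   sys.dm_os_performance_counters
WHERE  counter_name IN ('Full Scans/sec', 'Page Splits/sec',
                        'SQL Compilations/sec', 'User Connections',
                        'Number of Deadlocks/sec')
"""

for object_name, counter_name, value in conn.execute(sql):
    # Note: the '/sec' counters in this DMV are cumulative since server start;
    # sample twice and take the difference to get an actual per-second rate.
    print(f"{object_name} - {counter_name}: {value}")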

Thin Client Vs Thick Client


Thin Client
  • Mostly a web-based client, accessed through a common client: the browser.
  • Business logic resides in middleware or application servers.
  • When the URL is accessed, most of the business logic executes on the server; some client-side processing and rendering happens within the browser.
  • Uses stateless connections: a connection is opened for each request and closed after the response.
  • Slower to respond because:
    • a connection must be opened explicitly for each request
    • pages must be downloaded from the presentation layer
    • data must be retrieved from the database
    • business-logic processing happens in the middleware or application servers
  • Typically used by external users, e.g. bank customers.

Thick Client
  • Accessed through client software that is installed locally.
  • Business logic is installed on the local machine.
  • When the client is installed, all the business logic of the application is installed locally.
  • Uses dedicated connections.
  • Faster to respond because:
    • most of the processing happens locally
    • connections are closed only on explicit logout
  • Typically used by internal users, e.g. bank employees and customer support executives.
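
A minimal sketch of the two connection styles described above, using Python's requests library against a hypothetical endpoint: a bare requests.get opens and closes a connection per call (thin-client style), while a requests.Session keeps the underlying connection alive across calls, closer to a thick client's dedicated connection.

import requests

URL = "https://app.example.com/api/account"   # hypothetical endpoint

# Thin-client style: stateless, one connection per request.
for _ in range(3):
    requests.get(URL)          # connection opened, used, then closed each time

# Thick-client style: a dedicated, reused connection.
with requests.Session() as session:            # keep-alive connection pool
    for _ in range(3):
        session.get(URL)       # the same underlying TCP connection is reused
    # the connection is released only when the session is closed ("logout")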

Tuesday, 12 July 2016

Important factors of Performance Testing


 
Following are some important points regarding performance testing:
  1. We can set up the global sampling rate, error handling, debugging and frequency settings in the Monitor properties.
  2. In manual correlation, we can include a maximum of 64 parameters per script.
  3. By default, all Vuser information is stored on the Vuser host.
  4. The granularity of a graph can be set in seconds, minutes or hours.
  5. The online monitor can send 5 updates to the Controller for the data graph.
  6. VuGen generates an icon and title to represent each action performed by the virtual user.
  7. By default, the sample rate is 3 seconds, and the sampling rate used in online monitoring is expressed in seconds.
  8. JSON, XML and Base64 are the formats for which VuGen has built-in DFEs (Data Format Extensions).
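
As an illustration of the last point, the sketch below shows conceptually what a Base64 DFE does: it decodes an otherwise opaque recorded payload into readable text so its dynamic values can be correlated or parameterised. It uses Python's standard base64 module; the payload and field names are made up, and this is not VuGen's actual API.

import base64

# A request body as it might appear in a recorded script: Base64-encoded JSON (hypothetical).
raw_body = base64.b64encode(b'{"sessionId": "abc123", "action": "login"}')
print(raw_body)      # opaque as recorded, so dynamic values cannot be correlated

# What a Base64 DFE conceptually does: decode the payload into a readable form.
decoded = base64.b64decode(raw_body).decode("utf-8")
print(decoded)       # now the sessionId value can be correlated or parameterised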

Saturday, 9 July 2016

Performance Testing Case Study - Important factors

The following methods and steps are involved in end-to-end performance testing of a web application:

  1. Understanding Physical architecture
  2. Understanding the current status of the application
  3. Gather performance testing requirements
    1. Protocol Selection
    2. Identify scripting language 
  4. Evaluate the performance testing set up
    1. Network bandwidth settings
  5. Start implementing transaction scripting
    1. Recording scripts
    2. Replay scripts
    3. Correlation (see the sketch after this list)
    4. User data parameterisation
  6. Scenario creation
    1. Iteration settings
    2. Ramp up / Ramp down users
  7. Performance test execution
    1. Mimic browsers
  8. Analysis
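
A minimal sketch of what steps 5.3 and 5.4 mean in practice, written with Python's requests library against a hypothetical application; the URLs, field names and regular expression are assumptions, and in LoadRunner the same job is done with functions such as web_reg_save_param and parameter files.

import re
import requests

# User data parameterisation: each iteration uses different data
# instead of the hard-coded values captured during recording.
users = [("user1", "pass1"), ("user2", "pass2")]

for username, password in users:
    # Step 1: request the login page (hypothetical URL).
    resp = requests.get("https://app.example.com/login")

    # Correlation: capture a dynamic, server-generated value (e.g. a CSRF token)
    # from the response so it can be replayed in the next request.
    match = re.search(r'name="csrf_token" value="([^"]+)"', resp.text)
    token = match.group(1) if match else ""

    # Step 2: replay the captured value together with the parameterised user data.
    resp = requests.post(
        "https://app.example.com/login",
        data={"username": username, "password": password, "csrf_token": token},
    )
    print(username, resp.status_code)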

Tuesday, 26 May 2015

Internal Architecture of LoadRunner tool


VuGen stores and retrieves a vugen.ini file in the Windows folder. Vu scripts can be coded to use variable values obtained from parameter files external to the script. In the QTWeb.lrp file, found in LoadRunner's dat\protocols folder, add the entry MaxThreadPerProcess=5 under the [Vugen] section to limit the number of threads managed by each load generator mdrv.exe process.
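
A minimal sketch of automating that .lrp edit with Python's configparser, assuming the file parses as a standard INI file (real .lrp files can contain duplicate keys, hence strict=False); the installation path below is a hypothetical example.

import configparser

LRP_PATH = r"C:\LoadRunner\dat\protocols\QTWeb.lrp"   # hypothetical install path

config = configparser.ConfigParser(strict=False, interpolation=None)
config.optionxform = str           # preserve the case of keys such as MaxThreadPerProcess
config.read(LRP_PATH)

if not config.has_section("Vugen"):
    config.add_section("Vugen")
config.set("Vugen", "MaxThreadPerProcess", "5")   # limit threads per mdrv.exe process

with open(LRP_PATH, "w") as f:
    config.write(f)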
Application servers under test are placed under stress by driver processes such as mdrv.exe (the multi-threaded driver process) and r3vuser.exe, which emulate application clients such as the Internet Explorer web browser. mdrv.exe performs three main actions: cpp (the C language pre-processor), cci (C pre-compiling, which creates a .ci file), and execution using the driver for the protocol technology being tested.
Runs can be invoked to run "silently" by invoking mdrv.exe from a Windows batch script. Vusers are invoked as groups (a logical collection of virtual users running the same script on a specific load generator machine) by agents (magentproc.exe, about 3,900 KB) running as a service or as a process on the load generator machines.
Each machine hosting agents maintains an Execution Log in a .qtp file. When logging is enabled, the agent also creates within the results folder a sequential log file for each Vuser (segregated by Vuser group). During execution, this file is displayed in the View > Show Output window on the LoadRunner Controller machine.
Agents are launched by the Remote Agent Dispatcher process (formerly called the Remote Command Launcher, or RCL) on each load generator machine. Each agent refers to scenario (.lrs) definition files to determine which Vuser groups and scripts to run on the host machines.
The Controller is invoked using parameter values within files in the Windows OS folder (WINNT for Windows 2000, WINDOWS for Windows XP). The Windows folder is used because LoadRunner is designed to have only one instance of the Controller running at a time on a machine. The Controller (wlrun.exe) sends a copy of the scenario files along with the request. After a pre-set delay, the Scheduler running on the Controller machine instructs the agents (via Windows port 54345, or a dynamic port on UNIX) to initiate the test session scenarios.
During a run, execution results are stored in a results folder. It is best practice to set Results Settings to "Automatically create a results directory for each scenario execution", which means that LR will increment the results name each time a scenario run starts. For example, a value of "Res11" will be automatically incremented to "Res12", or sometimes "Res11-1". Errors are written to the output.mdb MS Access database.
Within each results folder, a "Log" folder is automatically created to contain a log file for each group. After a run, to view a log file from within the Controller, click a group and then right-click it to select "Show Vuser Log". As a scenario runs, monitors maintain counters locally on each host.
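
A minimal sketch of post-processing those per-group log files with Python; the results path and the "Error:" marker being searched for are assumptions about how errors appear in the output, not something LoadRunner defines.

from pathlib import Path

results_dir = Path(r"C:\LoadRunner\Results\Res11\Log")   # hypothetical results folder

error_counts = {}
for log_file in results_dir.glob("*.log"):
    with open(log_file, errors="replace") as f:
        # Count lines that look like errors; the exact marker is an assumption.
        error_counts[log_file.name] = sum(1 for line in f if "Error:" in line)

for name, count in sorted(error_counts.items()):
    print(f"{name}: {count} error lines")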
After a run, the "collate" process takes the .eve and .lrr result files and creates a temporary .mdb (MS Access) database in the results folder.
The Analysis module (analysisu.exe, about 8,320 KB) generates analysis graphs and reports using data from the .mdb database. The LoadRunner Results file results_name.lrr from each scenario run, also called an Analysis document file, is read by the Analysis program to display percentile graphs.
By default, the LRReport folder is created in the My Documents folder of the test analyst's local machine to store Analysis session files. Reports can optionally be formatted in HTML; their format is controlled by a .tem template file.
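
The percentile graphs mentioned above summarise raw transaction response times. A minimal sketch of the underlying 90th-percentile calculation in Python, using made-up timings purely for illustration.

import statistics

# Hypothetical transaction response times in seconds, as collected during a run.
response_times = [0.8, 1.1, 0.9, 2.4, 1.0, 3.7, 1.2, 0.95, 1.3, 5.9]

response_times.sort()
# 90th percentile: the value below which roughly 90% of the samples fall.
p90 = statistics.quantiles(response_times, n=10)[-1]

print(f"average = {statistics.mean(response_times):.2f}s, 90th percentile = {p90:.2f}s")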

Diagram: overview of the LR tool's internal architecture


Monday, 25 May 2015

Functional Testing Vs Non Functional Testing

System Testing:


Before actually putting the new system into operation, a test run of the system is done to remove any remaining bugs. This is an important phase of a successful system. After coding all the programs of the system, a test plan should be developed and run on a given set of test data. The output of the test run should match the expected results. Sometimes, system testing is considered part of the implementation process.

Using the test data, the following test runs are carried out:
Program test
System test
Program test: When the programs have been coded, compiled and brought to working condition, they must be individually tested with the prepared test data. All verification and validation checks must be carried out, and any undesirable behaviour must be noted and debugged (errors corrected).
System test: After the program test has been carried out for each program of the system and the errors removed, the system test is done. At this stage the test is done on actual data. The complete system is executed on the actual data, and at each stage of the execution the results or output of the system are analysed. During this analysis it may be found that the outputs do not match the expected output of the system. In such cases, the errors in the particular programs are identified, fixed and retested for the expected output. All independent modules are brought together and all the interfaces between the modules are tested; the whole set of software is tested to establish that all modules work together correctly as an application, system or package.
When it is ensured that the system is running error-free, the users are asked to run it with their own actual data so that the system can be shown running as per their requirements.

  • Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question "can the user do this?" or "does this particular feature work?"
  • Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance characteristics, behaviour under certain constraints, or security. Testing will determine the breaking point: the point at which extremes of scalability or performance lead to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly from the suitability perspective of its users.
  • Performance testing falls under the non-functional category and is generally executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.
  • Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that is large quantities of data or a large number of users. This is generally referred to as software scalability. The related load testing activity, when performed as a non-functional activity over an extended period, is often referred to as endurance testing. Volume testing is a way to test software functions even when certain components (for example a file or database) increase radically in size. Stress testing is a way to test reliability under unexpected or rare workloads. Stability testing (often referred to as load or endurance testing) checks whether the software can continue to function well over, or beyond, an acceptable period.
  • There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing, scalability testing and volume testing are often used interchangeably.
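
A minimal sketch of the functional versus non-functional distinction, using a purely hypothetical transfer_funds function: the functional assertion checks that the feature works, while the timing loop is a crude non-functional (responsiveness) check.

import time

def transfer_funds(balance, amount):
    # Hypothetical business function, used only for illustration.
    if amount <= 0 or amount > balance:
        raise ValueError("invalid amount")
    return balance - amount

# Functional test: "does this particular feature work?"
assert transfer_funds(100, 30) == 70

# Non-functional check: "how responsive is it under a repeated workload?"
start = time.perf_counter()
for _ in range(100_000):
    transfer_funds(100, 30)
elapsed = time.perf_counter() - start
print(f"100k calls took {elapsed:.3f}s ({elapsed / 100_000 * 1e6:.1f} microseconds per call)")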

Areas to do Resilience Testing?

The following are areas where we can do resilience testing to make sure that the system can recover from expected or unexpected events without loss of data or functionality.

  • Interruption via network servers: Simulate or initiate communication loss with the network (physically disconnect communication wires or power down network servers or routers).
  • Testing for the following conditions requires that a known database state be achieved: Several database fields, pointers, and keys should be corrupted manually and directly within the database (via database tools).
  • Recovery testing is highly intrusive. Procedures to disconnect cabling (simulating power or communication loss) may not be desirable or feasible.
  • Use of redundant hardware devices (e.g., servers, processors, disks), which are arranged such that one component immediately takes over from another should it fail. Disks, for example, can be included in the architecture as a RAID element (Redundant Array of Inexpensive Disks).
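
A minimal sketch of the idea behind the first bullet, using a purely hypothetical in-memory store whose connection drops at random; a real resilience test would interrupt actual network links or servers and then verify that no data was lost or corrupted.

import random

class FlakyStore:
    """Hypothetical store whose connection drops randomly, for illustration only."""
    def __init__(self):
        self.committed = {}

    def write(self, key, value):
        if random.random() < 0.3:               # simulate communication loss
            raise ConnectionError("link dropped")
        self.committed[key] = value             # write is atomic: all or nothing

def write_with_retry(store, key, value, attempts=5):
    for attempt in range(1, attempts + 1):
        try:
            store.write(key, value)
            return attempt
        except ConnectionError:
            continue                            # client recovers and retries
    raise RuntimeError("could not recover from repeated interruptions")

store = FlakyStore()
for i in range(100):
    write_with_retry(store, f"record-{i}", i)

# Resilience check: despite the simulated interruptions, no data was lost or corrupted.
assert all(store.committed[f"record-{i}"] == i for i in range(100))
print("all 100 records committed despite simulated interruptions")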