Analysis of Performance Data
After you capture and consolidate your results, analyze the data and compare the results against the accepted level for each metric. If the results indicate that the required performance levels have not been attained, analyze and fix the cause of the bottleneck. The data you collect helps you evaluate your application against its performance objectives, typically in terms of:
• Throughput versus user load.
• Response time versus user load.
• Resource utilization versus user load.
Acceptable Load Levels

| Metric | Accepted Level |
| --- | --- |
| %CPU Usage | Must not exceed 70% |
| Throughput (Requests/Sec) | 100 |
| Response Time (Seconds) | 2.5 |
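As a concrete illustration, comparing consolidated results against the accepted levels can be sketched in a few lines of Python. The metric names and thresholds below mirror the table above; the measured values are hypothetical.

```python
# Sketch: compare captured averages against the accepted levels in the
# table above. Thresholds come from that table; sample data is made up.

ACCEPTED_LEVELS = {
    "cpu_pct":        ("max", 70.0),   # %CPU must not exceed 70%
    "throughput_rps": ("min", 100.0),  # at least 100 requests/sec
    "response_sec":   ("max", 2.5),    # at most 2.5 s response time
}

def check_results(measured):
    """Return a list of (metric, measured_value, accepted_level) violations."""
    violations = []
    for metric, (kind, level) in ACCEPTED_LEVELS.items():
        value = measured[metric]
        if (kind == "max" and value > level) or (kind == "min" and value < level):
            violations.append((metric, value, level))
    return violations

# Hypothetical consolidated results from one test run:
measured = {"cpu_pct": 82.5, "throughput_rps": 96.0, "response_sec": 1.9}
for metric, value, level in check_results(measured):
    print(f"{metric}: measured {value}, accepted level {level}")
```

Any metric appearing in the output indicates a required performance level that was not attained and warrants bottleneck analysis.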
Analyze the captured data to identify performance issues and bottlenecks. When you analyze your data, bear in mind the following points:
• The data you collect is usually only an indicator of a problem, not the source of the problem. Indicators such as performance counters can point you toward specific areas of functionality, helping you focus your debugging or troubleshooting.
• Intermittent spikes in performance counter data may not be a big concern. If there is a reasonable explanation for such outliers, ignore them.
• Make sure that your test results are not skewed by warm-up effects. Let your test scripts run for a period of time before you start capturing metrics.
• If the data you collect is not complete, then your analysis is likely to be inaccurate. You sometimes need to retest and collect the missing information or use further analysis tools. For example, if your analysis of Common Language Runtime (CLR) performance counters indicates that a large number of generation 2 garbage collections are occurring, then you should use the CLR Profiler tool to profile the overall memory usage pattern for the application.
• You should be able to identify and isolate the areas that need further tuning. This assumes that you have already optimized your code and design, and that only the configuration settings still need tuning.
• If you are currently in the process of performance tuning, then you need to compare your current set of results with previous results or with your baseline performance metrics.
• If, during your analysis, you identify several bottlenecks or performance issues, prioritize them and address those that are likely to have the biggest impact first. You can also prioritize this list on the basis of which bottleneck you hit first when running a test.
• Document your analysis. Write down your recommendations, including what you observed, where you observed it, and how you applied configuration changes to resolve the issue.
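Two of the points above, discarding warm-up samples and damping intermittent spikes, can be sketched as follows. The sample data, warm-up count, and window size are all hypothetical choices for illustration.

```python
# Sketch: drop warm-up samples before averaging, and smooth one-off
# spikes with a moving median (which damps isolated outliers without
# hiding a sustained rise). All numbers here are made up.
from statistics import mean, median

def trim_warmup(samples, warmup_count):
    """Drop the first `warmup_count` samples captured during warm-up."""
    return samples[warmup_count:]

def moving_median(samples, window=3):
    """Median filter over a trailing window of measurements."""
    return [median(samples[max(0, i - window + 1):i + 1])
            for i in range(len(samples))]

# Hypothetical response times (seconds); the first two are warm-up noise
# and the 6.0 is an intermittent spike.
raw = [9.8, 7.2, 1.1, 1.2, 6.0, 1.3, 1.2, 1.4]
steady = trim_warmup(raw, 2)
smoothed = moving_median(steady)
print(round(mean(smoothed), 2))
```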
How to Identify Bottlenecks
The first step in identifying bottlenecks is to know which tests and measurements to run to simulate varying user loads and access patterns for the application. The following measurements help you expose bottlenecks and isolate areas that require tuning:
• Measure response time, throughput, and resource utilization across user loads.
1) Measuring Throughput across User Loads (Throughput vs. User Load)
When you measure throughput across user loads, watch for the peak level of throughput. At the point where throughput starts to fall, the bottleneck has been hit; performance continues to degrade from this point onward. An example is shown in the snapshot below.
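Finding this peak can be sketched programmatically. The (user load, requests/sec) pairs below are hypothetical measurements of the shape described above: throughput climbs, peaks, then falls once the bottleneck is hit.

```python
# Sketch: find the user load at which throughput peaks. Beyond this
# point throughput falls, indicating the bottleneck has been hit.
# The measurement pairs are hypothetical.

def throughput_peak(samples):
    """samples: list of (user_load, requests_per_sec) sorted by user_load.
    Returns the user load with the highest observed throughput."""
    return max(samples, key=lambda s: s[1])[0]

samples = [(200, 40), (400, 75), (600, 95), (800, 104),
           (950, 108), (1100, 101), (1300, 82)]
print(throughput_peak(samples))  # → 950; throughput falls after this load
```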
2) Measuring Response Time across User Loads (Response Time vs. User Load)
When you measure response time with a varying number of users, watch for a sharp rise in response time. This rise marks the point of poor efficiency, and performance only degrades from this point onward, as shown in the snapshot below.
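Detecting that sharp rise can be sketched as a simple scan over consecutive measurements. The data pairs and the 2x jump factor are hypothetical choices; pick a factor that suits your own baseline variance.

```python
# Sketch: locate the first user load where response time jumps sharply
# relative to the previous measurement. Data and factor are made up.

def response_time_knee(samples, factor=2.0):
    """samples: list of (user_load, response_sec) sorted by user_load.
    Returns the first load where response time exceeds `factor` times
    the previous measurement, i.e. where efficiency drops sharply."""
    for (_, prev), (load, cur) in zip(samples, samples[1:]):
        if cur > prev * factor:
            return load
    return None  # no sharp rise observed in this range

samples = [(200, 0.8), (400, 0.9), (600, 1.1), (800, 1.2),
           (1000, 1.4), (1200, 3.6), (1300, 9.5)]
print(response_time_knee(samples))  # → 1200; the sharp rise begins here
```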
3) Measuring Resource Utilization across User Loads (Resource Utilization vs. User Load)
Analyze resource utilization levels across linearly increasing user loads. Check whether the utilization levels increase at a sharper rate as new users are added and more transactions are performed, as shown in the snapshot below.
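One way to see whether utilization is climbing "at a sharper rate" is to compare the utilization added per extra user between consecutive measurements; if that slope keeps rising, growth is faster than linear. The (user load, %CPU) pairs below are hypothetical.

```python
# Sketch: compute %CPU added per extra user between consecutive
# measurements. A rising slope means utilization grows faster than
# linearly with load. The measurement pairs are made up.

def utilization_slopes(samples):
    """samples: list of (user_load, cpu_pct) sorted by user_load.
    Returns the %CPU added per extra user between each pair of
    consecutive measurements."""
    return [(c2 - c1) / (u2 - u1)
            for (u1, c1), (u2, c2) in zip(samples, samples[1:])]

samples = [(300, 15), (600, 30), (900, 48), (1200, 78)]
slopes = utilization_slopes(samples)
print(slopes)  # each slope larger than the last: sharper-than-linear growth
```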
Sample application load test observations based on the above measurements:

| Sl No. | Description | Reference |
| --- | --- | --- |
| 1 | When 1300 or more concurrent users try to access the application, a "Request timed out" error occurs and the load test fails. | |
| 2 | CPU Utilization: At a load of 1200 concurrent users, the web server starts utilizing more processor time. | Resource Utilization vs. User Load snapshot |
| 3 | Throughput: At 1300 concurrent users, throughput (Requests/Sec) no longer increases with additional load; throughput starts degrading as early as 950 concurrent users. | Throughput vs. User Load snapshot |
| 4 | Response Time: At 1300 concurrent users, users start experiencing slow web page responses. | Response Time vs. User Load snapshot |
| 5 | Failed Tests: Test failures start appearing as early as a load of 1200 users. | |
Load testing is made easy with Visual Studio, helping professional testers analyze performance results.
Please subscribe to our blog to get more updates like this. Share knowledge.
Happy Load Testing!