Thursday, November 20, 2008

Measuring Performance of Stored Procedures

Database developers need to write stored procedures that are not only fully functional but also perform acceptably. Because database servers rely heavily on permanent storage media (mainly to guarantee the ACID properties), which are known for slow performance, optimizing stored procedures for performance is very important. Here I'll describe some of the counters used to measure performance and the methods of capturing them. This post is intended for database developers who write stored procedures and optimize them for performance.
Currently there are three counters widely used for measuring performance:
  • Execution time
  • CPU Cost
  • IO Cost

Execution Time
The most primitive method is to read the time taken to execute the query from SQL Server Management Studio (SSMS). The status bar displays the elapsed time in hours, minutes and seconds. This measure is useful when a query takes a long time (usually more than 10 seconds, so that a 10% improvement to the query can be measured) and sub-second differences are insignificant. When a query executes within a second, SSMS rounds the value, so this figure cannot be used for queries that complete in sub-second time.

Another method is to get the system time before and after the execution of the stored procedure and examine the difference. SSMS can be used for this purpose, simply by adding PRINT statements before and after the stored procedure call.


PRINT CONVERT(varchar, GETDATE(), 114)
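
For illustration, a minimal sketch of this pattern; the procedure name and parameter below are placeholders, not from any real database:

-- Hypothetical example: print a timestamp before and after the call
PRINT 'Start: ' + CONVERT(varchar, GETDATE(), 114)   -- hh:mi:ss:mmm
EXEC dbo.usp_GetCustomerOrders @CustomerId = 42      -- placeholder procedure
PRINT 'End:   ' + CONVERT(varchar, GETDATE(), 114)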


The developer could also use variables to hold the values and calculate the time difference. The advantage of this method is that it can be used within a query too. For a stored procedure that is executed multiple times with different parameters, PRINT statements can be injected between each execution to analyze the time differences. For a multi-statement procedure, developers need to modify the stored procedure to add the PRINT statements, but this gives them better control over the data. On the disadvantages side, one major point is that it cannot capture the time taken for compilation. Instead of PRINT statements, the timings can be inserted into a table if further analysis is required. Because the datetime data type is accurate only to about 3 milliseconds, the results may vary from the actual time by up to 3 milliseconds; still, this is better than the previous method.
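A minimal sketch of the variable-based approach, assuming a hypothetical procedure and logging table; DATEDIFF in milliseconds is used to compute the difference:

DECLARE @Start datetime
DECLARE @End datetime

SET @Start = GETDATE()
EXEC dbo.usp_GetCustomerOrders @CustomerId = 42   -- placeholder procedure
SET @End = GETDATE()

-- Elapsed time in milliseconds (accurate to roughly 3 ms with datetime)
PRINT 'Elapsed (ms): ' + CONVERT(varchar, DATEDIFF(ms, @Start, @End))

-- Alternatively, log the timings to a table for later analysis
-- INSERT INTO dbo.ProcTimings (ProcName, StartTime, EndTime)   -- hypothetical table
-- VALUES ('usp_GetCustomerOrders', @Start, @End)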

SQL Server has some other methods too.

One of them is a SET option which can be used to display the time taken:


SET STATISTICS TIME ON

SET STATISTICS TIME OFF

When this SET option is ON, SQL Server will return a message which may look similar to:

SQL Server Execution Times:

CPU time = 109 ms, elapsed time = 164 ms.

Elapsed time is the execution time of the query.

However, there is another time involved in this query: the time taken to compile it. To see that, the query should be issued when SET STATISTICS TIME ON has already been executed. When a batch of statements is submitted, SQL Server parses and compiles the whole batch before executing the statements one by one, so the SET option is not yet in effect while the batch containing it is being compiled. As GO is the batch separator, inserting a GO between the SET command and the query makes SQL Server compile and execute them as separate batches. Executing the SET command first and then executing the query (i.e. in two batches) will also do the trick.
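A minimal sketch of the two-batch pattern; the SELECT below is a placeholder against a hypothetical table, so substitute the statement or procedure being measured:

SET STATISTICS TIME ON
GO
-- Placeholder query; replace with the query or procedure under test
SELECT *
FROM dbo.Orders                    -- hypothetical table
WHERE OrderDate >= '20080101'
GO
SET STATISTICS TIME OFF
GO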

When this is done, the messages include the parse and compile time as well:

SQL Server parse and compile time:

CPU time = 62 ms, elapsed time = 72 ms.

(1139 row(s) affected)

SQL Server Execution Times:

CPU time = 109 ms, elapsed time = 167 ms.

In this case, the server took 167 milliseconds to execute the query and an additional 72 milliseconds to compile it. The unit of measurement is milliseconds, and smaller figures are rounded. When a multi-statement stored procedure is called, the total time may differ slightly from the sum of the individual statements; for this reason, SQL Server provides a summary too. Additionally, some internal operations such as creating worktables or building statistics on temporary tables may create additional load that is not captured against individual statements, but it is added to the final cost.

I'll explain the other topics in my next post.
