SQL Server 2005 Performance Troubleshooting White Paper



Troubleshooting Performance Problems in SQL Server 2005


Published: October 1, 2005


Writers: Sunil Agarwal, Boris Baryshnikov, Tom Davidson, Keith
Elmore, Denzil Ribeiro, Juergen Thomas


Applies To: SQL Server 2005


Summary: It is not uncommon to experience the occasional slow down
of a SQL Server database. A poorly designed database or a system that is
improperly configured for the workload are but two of many possible
causes of this type of performance problem. Administrators need to
proactively prevent or minimize problems and, when they occur, diagnose
the cause and take corrective actions to fix the problem. This paper
provides step-by-step guidelines for diagnosing and troubleshooting
common performance problems by using publicly available tools such as
SQL Server Profiler, System Monitor, and the new Dynamic Management
Views in SQL Server 2005.




Many customers can experience an occasional slow down of their
SQL Server database. The reasons can range from a poorly designed
database to a system that is improperly configured for the workload. As
an administrator, you want to proactively prevent or minimize problems
and, when they occur, diagnose the cause and, when possible, take
corrective actions to fix the problem. This white paper limits its scope
to the problems commonly seen by Customer Support Services (CSS or PSS)
at Microsoft® Corporation since an exhaustive analysis of all possible
problems is not feasible. We provide step-by-step guidelines for
diagnosing and troubleshooting common performance problems by using
publicly available tools such as SQL Server Profiler, System Monitor
(Perfmon), and the new Dynamic Management Views in Microsoft
SQL Server™ 2005.




The primary goal of this paper is to provide a general methodology for
diagnosing and troubleshooting SQL Server performance problems in common
customer scenarios by using publicly available tools.

SQL Server 2005 has made great strides in supportability. The kernel
layer (SQL-OS) has been re-architected and internal structures and
statistical data are exposed as relational rowsets through dynamic
management views (DMVs). SQL Server 2000 exposes some of this
information through system tables such as sysprocesses, but sometimes
you need to generate a physical dump of the SQL Server process memory to
extract relevant information from internal structures. There are two
main issues with this. First, customers cannot always provide the
physical dump due to the size of the dump and the time it takes to
create it. Second, it can take longer to diagnose the problem because
the files must generally be transmitted to Microsoft Corporation for
analysis.
This brings us to the secondary goal of this paper, which is to showcase
DMVs. DMVs can expedite the diagnosis process by eliminating the need to
generate and analyze physical dumps in most cases. This paper provides,
when possible, a side-by-side comparison of troubleshooting the same
problem in SQL Server 2000 and in SQL Server 2005. DMVs provide a
simplified and familiar relational interface for getting critical system
information. This information can be used for monitoring purposes to
alert administrators to any potential problems. Or, the information can
be polled and collected periodically for detailed analysis later.




There can be many reasons for a slowdown in SQL Server. We use the
following three key symptoms to start diagnosing problems.

  • Resource bottlenecks: CPU, memory, and I/O bottlenecks are
    covered in this paper. We do not consider network issues. For each
    resource bottleneck, we describe how to identify the problem and
    then iterate through the possible causes. For example, a memory
    bottleneck can lead to excessive paging that ultimately impacts
    performance.

  • Tempdb bottlenecks: Since there is only one tempdb for each
    SQL Server instance, this can be a performance and a disk space
    bottleneck. A misbehaving application can overload tempdb both
    in terms of excessive DDL/DML operations and in space. This can
    cause unrelated applications running on the server to slow down or
    fail.

  • A slow running user query: The performance of an existing query
    may regress or a new query may appear to be taking longer than
    expected. There can be many reasons for this. For example:

    • Changes in statistical information can lead to a poor query plan
      for an existing query.

    • Missing indexes can force table scans and slow down the query.

    • An application can slow down due to blocking even if resource
      utilization is normal.

    Excessive blocking, for example, can be due to poor application or
    schema design or choosing an improper isolation level for the
    transaction.

The causes of these symptoms are not necessarily independent of each
other. The poor choice of a query plan can tax system resources and
cause an overall slowdown of the workload. So, if a large table is
missing a useful index, or the query optimizer decides not to use it,
this not only causes the query to slow down but it also puts heavy
pressure on the I/O subsystem to read the unnecessary data pages and on
the memory (buffer pool) to store these pages in the cache. Similarly,
excessive recompilation of a frequently running query can put pressure
on the CPU.



Resource Bottlenecks

The next sections of this paper discuss the CPU, memory, and I/O
subsystem resources and how these can become bottlenecks. (Network
issues are outside of the scope of this paper.) For each resource
bottleneck, we describe how to identify the problem and then iterate
through the possible causes. For example, a memory bottleneck can lead
to excessive paging, which can ultimately impact performance.

Before you can determine if you have a resource bottleneck, you need to
know how resources are used under normal circumstances. You can use the
methods outlined in this paper to collect baseline information about the
use of the resource (when you are not having performance problems).

You might find that the problem is a resource that is running near
capacity and that SQL Server cannot support the workload in its current
configuration. To address this issue, you may need to add more
processing power, memory, or increase the bandwidth of your I/O or
network channel. But, before you take that step, it is useful to
understand some common causes of resource bottlenecks. Some solutions,
such as reconfiguration, do not require adding additional resources.


Tools for resolving resource bottlenecks

One or more of the following tools are used to resolve a particular
resource bottleneck.

  • System Monitor (PerfMon): This tool is available as part of
    Windows. For more information, see the System Monitor
    documentation.

  • SQL Server Profiler: See SQL Server Profiler in
    the Performance Tools group in the SQL Server 2005 program
    group.

  • DBCC commands: See SQL Server Books Online and Appendix A for
    details.

  • DMVs: See SQL Server Books Online for details.



CPU Bottlenecks

A CPU bottleneck that happens suddenly and unexpectedly, without
additional load on the server, is commonly caused by a nonoptimal query
plan, a poor configuration, or design factors, and not insufficient
hardware resources. Before rushing out to buy faster and/or more
processors, you should first identify the largest consumers of CPU
bandwidth and see if they can be tuned.

System Monitor is generally the best means to determine if the server is
CPU bound. You should look to see if the Processor: % Processor Time
counter is high; values in excess of 80% processor time per CPU
are generally deemed to be a bottleneck. You can also monitor the
SQL Server schedulers using the sys.dm_os_schedulers view to see if
the number of runnable tasks is typically nonzero. A nonzero value
indicates that tasks have to wait for their time slice to run; high
values for this counter are a symptom of a CPU bottleneck. You can use
the following query to list all the schedulers and look at the number of
runnable tasks.

select 
    scheduler_id, 
    current_tasks_count, 
    runnable_tasks_count 
from 
    sys.dm_os_schedulers 
where 
    scheduler_id < 255

The following query gives you a high-level view of which currently
cached batches or procedures are using the most CPU. The query
aggregates the CPU consumed by all statements with the same
plan_handle (meaning that they are part of the same batch or
procedure). If a given plan_handle has more than one statement, you may
have to drill in further to find the specific query that is the largest
contributor to the overall CPU usage.

select top 50 
    sum(qs.total_worker_time) as total_cpu_time, 
    sum(qs.execution_count) as total_execution_count, 
    count(*) as number_of_statements, 
    qs.plan_handle 
from 
    sys.dm_exec_query_stats qs 
group by qs.plan_handle 
order by sum(qs.total_worker_time) desc

The remainder of this section discusses some common CPU-intensive
operations that can occur with SQL Server, as well as efficient methods
to detect and resolve these problems.


Excessive compilation and recompilation

When a batch or remote procedure call (RPC) is submitted to SQL Server,
before it begins executing, the server checks for the validity and
correctness of the query plan. If one of these checks fails, the batch
may have to be compiled again to produce a different query plan. Such
compilations are known as recompilations. These recompilations are
generally necessary to ensure correctness and are often performed when
the server determines that there could be a more optimal query plan due
to changes in underlying data. Compilations by nature are CPU intensive
and hence excessive recompilations could result in a CPU-bound
performance problem on the system.

In SQL Server 2000, when SQL Server recompiles a stored procedure, the
entire stored procedure is recompiled, not just the statement that
triggered the recompile. SQL Server 2005 introduces statement-level
recompilation of stored procedures. When SQL Server 2005 recompiles
stored procedures, only the statement that caused the recompilation is
compiled—not the entire procedure. This uses less CPU bandwidth and
results in less contention on lock resources such as COMPILE locks.
Recompilation can happen due to various reasons, such as:

  • Schema changed

  • Statistics changed

  • Deferred compile

  • SET option changed

  • Temporary table changed

  • Stored procedure created with the WITH RECOMPILE option, or a query
    that uses the OPTION (RECOMPILE) hint

You can use System Monitor (PerfMon) or SQL Trace (SQL Server Profiler)
to detect excessive compiles and recompiles.

System Monitor (Perfmon)

The SQL Statistics object provides counters to monitor compilation
and the type of requests that are sent to an instance of SQL Server. You
must monitor the number of query compilations and recompilations in
conjunction with the number of batches received to find out if the
compiles are contributing to high CPU use. Ideally, the ratio of SQL
Recompilations/sec to Batch Requests/sec should be very low unless
users are submitting ad hoc queries.

The key data counters to look at are as follows.

  • SQL Server: SQL Statistics: Batch Requests/sec

  • SQL Server: SQL Statistics: SQL Compilations/sec

  • SQL Server: SQL Statistics: SQL Recompilations/sec

For more information, see “SQL Statistics Object” in SQL Server Books
Online.
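These counters can also be read from inside SQL Server through the
sys.dm_os_performance_counters DMV. A minimal sketch follows; the
counter names are assumed to match the PerfMon display names, and the
values are cumulative, so compare two snapshots taken over an interval.

```sql
-- Read the batch and compilation counters from the DMV instead of PerfMon.
select object_name, counter_name, cntr_value
from sys.dm_os_performance_counters
where counter_name in
    ('Batch Requests/sec', 'SQL Compilations/sec', 'SQL Re-Compilations/sec');
```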

SQL Trace

If the PerfMon counters indicate a high number of recompiles, the
recompiles could be contributing to the high CPU consumed by SQL Server.
We would then need to look at the profiler trace to find the stored
procedures that were being recompiled. The SQL Server Profiler trace
gives us that information along with the reason for the recompilation.
You can use the following events to get this information.

SP:Recompile / SQL:StmtRecompile. The SP:Recompile and
the SQL:StmtRecompile event classes indicate which stored procedures
and statements have been recompiled. When you compile a stored
procedure, one event is generated for the stored procedure and one for
each statement that is compiled. However, when a stored procedure
recompiles, only the statement that caused the recompilation is
recompiled (not the entire stored procedure as in SQL Server 2000). Some
of the more important data columns for the SP:Recompile event class
are listed below. The EventSubClass data column in particular is
important for determining the reason for the
recompile. SP:Recompile is triggered once for the procedure or
trigger that is recompiled and is not fired for an ad hoc batch that
could likely be recompiled. In SQL Server 2005, it is more useful to
monitor SQL:StmtRecompiles as this event class is fired when any
type of batch, ad hoc, stored procedure, or trigger is recompiled.

The key data columns we look at in these events are as follows.

  • EventClass

  • EventSubClass

  • ObjectID (represents stored procedure that contains this statement)

  • SPID

  • StartTime

  • SqlHandle

  • TextData

For more information, see “SQL:StmtRecompile Event Class” in SQL Server
Books Online.

If you have a trace file saved, you can use the following query to see
all the recompile events that were captured in the trace.

select spid, StartTime, TextData, EventClass, EventSubClass, ObjectID 
from fn_trace_gettable('e:\recompiletrace.trc', 1) 
where EventClass in (37, 75, 166)

EventClass 37 = SP:Recompile, 75 = CursorRecompile, 166 = SQL:StmtRecompile

You could further group the results from this query by the SqlHandle and
ObjectID columns, or by various other columns, in order to see if most
of the recompiles are attributed to one stored procedure or are due to
some other reason (such as a SET option that has changed).
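For example, a grouped rollup of the same trace might look like the
following sketch (same trace file; column names as captured by SQL
Trace).

```sql
-- Count recompile events by object and by reason (EventSubClass).
select ObjectID, EventSubClass, count(*) as recompile_count
from fn_trace_gettable('e:\recompiletrace.trc', 1)
where EventClass in (37, 75, 166)
group by ObjectID, EventSubClass
order by recompile_count desc;
```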

Showplan XML For Query Compile. The Showplan XML For Query
Compile event class occurs when Microsoft SQL Server compiles or
recompiles a SQL statement. This event has information about the
statement that is being compiled or recompiled. This information
includes the query plan and the object ID of the procedure in question.
Capturing this event has significant performance overhead, as it is
captured for each compilation or recompilation. If you see a high value
for the SQL Compilations/sec counter in System Monitor, you should
monitor this event. With this information, you can see which statements
are frequently recompiled. You can use this information to change the
parameters of those statements. This should reduce the number of
recompiles.

DMVs. When you use the sys.dm_exec_query_optimizer_info DMV,
you can get a good idea of the time SQL Server spends optimizing. If you
take two snapshots of this DMV, you can get a good feel for the time
that is spent optimizing in the given time period.

select *  
from sys.dm_exec_query_optimizer_info 

counter          occurrence           value                 
---------------- -------------------- ---------------------  
optimizations    81                   1.0 
elapsed time     81                   6.4547820702944486E-2

In particular, look at the elapsed time, which is the time elapsed due
to optimizations. Since the elapsed time during optimization is
generally close to the CPU time that is used for the optimization (since
the optimization process is very CPU bound), you can get a good measure
of the extent to which the compile time is contributing to the high CPU
consumption.

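One way to take those two snapshots, as a sketch, is to copy the DMV
into temporary tables before and after the interval of interest. Note
that the value column of this DMV holds an average per occurrence, so
totals must be derived before diffing.

```sql
select * into #snap1 from sys.dm_exec_query_optimizer_info;
waitfor delay '00:05:00';   -- the measurement interval
select * into #snap2 from sys.dm_exec_query_optimizer_info;

-- value is an average per occurrence, so compute totals before diffing.
select s2.counter,
       s2.occurrence - s1.occurrence as occurrences_delta,
       (s2.occurrence * s2.value) - (s1.occurrence * s1.value) as total_value_delta
from #snap2 s2
join #snap1 s1 on s1.counter = s2.counter
where s2.counter in ('optimizations', 'elapsed time');
```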
Another DMV that is useful for capturing this information
is sys.dm_exec_query_stats.

The data columns that you want to look at are as follows:

  • sql_handle

  • total_worker_time

  • plan_generation_num

  • statement_start_offset

For more information, see the SQL Server Books Online topic
on sys.dm_exec_query_stats.

In particular, plan_generation_num indicates the number of times the
query has recompiled. The following sample query gives you the top
25 stored procedures that have been recompiled.


select top 25 
    sql_text.text, 
    sql_handle, 
    plan_generation_num, 
    execution_count 
from 
    sys.dm_exec_query_stats a 
    cross apply sys.dm_exec_sql_text(sql_handle) as sql_text 
where 
    plan_generation_num > 1 
order by plan_generation_num desc

For additional information, see Batch Compilation, Recompilation, and
Plan Caching Issues in SQL Server 2005 on Microsoft TechNet.


If you have detected excessive compilation/recompilation, consider the
following options.

  • If the recompile occurred because a SET option changed, use SQL
    Server Profiler to determine which SET option changed. Avoid
    changing SET options within stored procedures. It is better to set
    them at the connection level. Ensure that SET options are not
    changed during the lifetime of the connection.

  • Recompilation thresholds for temporary tables are lower than for
    normal tables. If the recompiles on a temporary table are due to
    statistics changes, you can change the temporary tables to table
    variables. A change in the cardinality of a table variable does not
    cause a recompilation. The drawback of this approach is that the
    query optimizer does not keep track of a table variable’s
    cardinality because statistics are not created or maintained on
    table variables. This can result in nonoptimal query plans. You can
    test the different options and choose the best one.

    Another option is to use the KEEP PLAN query hint. This sets the
    threshold of temporary tables to be the same as that of permanent
    tables. The EventSubclass column indicates that “Statistics
    Changed” for an operation on a temporary table.

  • To avoid recompilations that are due to changes in statistics (for
    example, when the plan becomes suboptimal due to change in the data
    statistics), specify the KEEPFIXED PLAN query hint. With this option
    in effect, recompilations can only happen because of
    correctness-related reasons (for example, when the underlying table
    structure has changed and the plan no longer applies) and not due to
    statistics. An example might be when a recompilation occurs if the
    schema of a table that is referenced by a statement changes, or if a
    table is marked with the sp_recompile stored procedure.

  • Turning off the automatic updates of statistics for indexes and
    statistics that are defined on a table or indexed view prevents
    recompiles that are due to statistics changes on that object. Note,
    however, that turning off the “auto-stats” feature by using this
    method is usually not a good idea. This is because the query
    optimizer is no longer sensitive to data changes in those objects
    and suboptimal query plans might result. Use this method only as a
    last resort after exhausting all other alternatives.

  • Batches should have qualified object names (for example, dbo.Table1)
    to avoid recompilation and to avoid ambiguity between objects.

  • To avoid recompiles that are due to deferred compiles, do not
    interleave DML and DDL or create the DDL from conditional constructs
    such as IF statements.

  • Run Database Engine Tuning Advisor (DTA) to see if any indexing
    changes improve the compile time and the execution time of the

  • Check to see if the stored procedure was created with the WITH
    RECOMPILE option or if the RECOMPILE query hint was used. If a
    procedure was created with the WITH RECOMPILE option, in
    SQL Server 2005, we may be able to take advantage of the statement
    level RECOMPILE hint if a particular statement within that procedure
    needs to be recompiled. This would avoid the necessity of
    recompiling the whole procedure each time it executes, while at the
    same time allowing the individual statement to be compiled. For more
    information on the RECOMPILE hint, see SQL Server Books Online.
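A few of the options above can be sketched in Transact-SQL. The table,
column, and statistics names here are hypothetical, not from this
paper.

```sql
-- 1. Replace a temporary table with a table variable; table variables
--    have no statistics, so cardinality changes do not trigger recompiles
--    (but the optimizer then assumes a small row count).
declare @work table (id int primary key, amount money);
insert into @work (id, amount)
    select id, amount from dbo.Orders where amount > 100;  -- dbo.Orders is hypothetical

-- 2. Allow recompiles only for correctness-related reasons:
select sum(amount) from dbo.Orders
option (keepfixed plan);

-- 3. Last resort: turn off automatic statistics updates on one table,
--    or update the statistics once and mark them NORECOMPUTE.
exec sp_autostats 'dbo.Orders', 'OFF';
update statistics dbo.Orders with norecompute;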

Inefficient query plan

When generating an execution plan for a query, the SQL Server optimizer
attempts to choose a plan that provides the fastest response time for
that query. Note that the fastest response time doesn’t necessarily mean
minimizing the amount of I/O that is used, nor does it necessarily mean
using the least amount of CPU—it is a balance of the various resources.

Certain types of operators are more CPU intensive than others. By their
nature, the Hash operator and Sort operator scan through their
respective input data. With read ahead (prefetch) being used during such
a scan, the pages are almost always available in the buffer cache before
the page is needed by the operator. Thus, waits for physical I/O are
minimized or eliminated. When these types of operations are no longer
constrained by physical I/O, they tend to manifest themselves by high
CPU consumption. By contrast, nested loop joins have many index lookups
and can quickly become I/O bound if the index lookups are traversing to
many different parts of the table so that the pages can’t fit into the
buffer cache.

The most significant input the optimizer uses in evaluating the cost of
various alternative query plans is the cardinality estimates for each
operator, which you can see in the Showplan
(EstimateRows and EstimateExecutions attributes). Without
accurate cardinality estimates, the primary input used in optimization
is flawed, and many times so is the final plan.

For an excellent white paper that describes in detail how the SQL Server
optimizer uses statistics, see Statistics Used by the Query Optimizer
in Microsoft
SQL Server 2005.
The white paper discusses how the optimizer uses statistics, best
practices for maintaining up-to-date statistics, and some common query
design issues that can prevent accurate estimate cardinality and thus
cause inefficient query plans.


Inefficient query plans are usually detected comparatively. An
inefficient query plan may cause increased CPU consumption.

The query against sys.dm_exec_query_stats is an efficient way to
determine which query is using the most cumulative CPU.

select 
    highest_cpu_queries.plan_handle, 
    highest_cpu_queries.total_worker_time, 
    q.dbid, 
    q.objectid, 
    q.[text] 
from 
    (select top 50 
        qs.plan_handle, 
        qs.total_worker_time 
    from 
        sys.dm_exec_query_stats qs 
    order by qs.total_worker_time desc) as highest_cpu_queries 
    cross apply sys.dm_exec_sql_text(plan_handle) as q 
order by highest_cpu_queries.total_worker_time desc

Alternatively, query against sys.dm_exec_cached_plans by using
filters for various operators that may be CPU intensive, such as ‘%Hash
Match%’, ‘%Sort%’ to look for suspects.
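One way to do that, as a sketch, is to cast the showplan XML of each
cached plan to text and filter on the operator names.

```sql
-- Look for cached plans containing potentially CPU-intensive operators.
select cp.plan_handle, q.text
from sys.dm_exec_cached_plans cp
    cross apply sys.dm_exec_query_plan(cp.plan_handle) qp
    cross apply sys.dm_exec_sql_text(cp.plan_handle) q
where cast(qp.query_plan as nvarchar(max)) like '%Hash Match%'
   or cast(qp.query_plan as nvarchar(max)) like '%Sort%';
```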


Consider the following options if you have detected inefficient query
plans.

  • Tune the query with the Database Engine Tuning Advisor to see if it
    produces any index recommendations.

  • Check for issues with bad cardinality estimates.

    Are the queries written so that they use the most restrictive WHERE
    clause that is applicable? Unrestricted queries are resource
    intensive by their very nature.

    Run UPDATE STATISTICS on the tables involved in the query and check
    to see if the problem persists.

    Does the query use constructs for which the optimizer is unable to
    accurately estimate cardinality? Consider whether the query can be
    modified in a way so that the issue can be avoided.

  • If it is not possible to modify the schema or the query, SQL
    Server 2005 has a new plan guide feature that allows you to specify
    query hints to add to queries that match certain text. This can be
    done for ad hoc queries as well as inside a stored procedure. Hints
    such as OPTION (OPTIMIZE FOR) allow you to impact the cardinality
    estimates while leaving the optimizer its full array of potential
    plans. Other hints such as OPTION (FORCE ORDER) or OPTION (USE PLAN)
    allow you varying degrees of control over the query plan.
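As a sketch of the plan guide feature, the following attaches an
OPTIMIZE FOR hint to one statement inside a stored procedure without
changing the procedure's text. The guide name, procedure, table, and
parameter are all illustrative.

```sql
-- dbo.GetOrders, dbo.Orders, and @cust are hypothetical.
exec sp_create_plan_guide
    @name = N'OrdersGuide',
    @stmt = N'select * from dbo.Orders where CustomerID = @cust',
    @type = N'OBJECT',
    @module_or_batch = N'dbo.GetOrders',
    @params = NULL,
    @hints = N'OPTION (OPTIMIZE FOR (@cust = 1234))';
```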

Intra-query parallelism

When generating an execution plan for a query, the SQL Server optimizer
attempts to choose the plan that provides the fastest response time for
that query. If the query’s cost exceeds the value specified in
the cost threshold for parallelism option and parallelism has not
been disabled, then the optimizer attempts to generate a plan that can
be run in parallel. A parallel query plan uses multiple threads to
process the query, with each thread distributed across the available
CPUs and concurrently utilizing CPU time from each processor. The
maximum degree of parallelism can be limited server wide using the max
degree of parallelism option or on a per-query level using the OPTION
(MAXDOP) hint.
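Both forms can be sketched as follows; the table name is hypothetical.

```sql
-- Per query: force serial execution for one statement.
select count(*) from dbo.BigTable   -- dbo.BigTable is hypothetical
option (maxdop 1);

-- Server wide: 'max degree of parallelism' is an advanced option,
-- so 'show advanced options' must be enabled first.
exec sp_configure 'show advanced options', 1;
reconfigure;
exec sp_configure 'max degree of parallelism', 4;
reconfigure;
```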

The decision on the actual degree of parallelism (DOP) used for
execution—a measure of how many threads will do a given operation in
parallel—is deferred until execution time. Before executing the query,
SQL Server  2005 determines how many schedulers are under-utilized and
chooses a DOP for the query that fully utilizes the remaining
schedulers. Once a DOP is chosen, the query runs with the chosen degree
of parallelism until completion. A parallel query typically uses a
similar but slightly higher amount of CPU time as compared to the
corresponding serial execution plan, but it does so in a shorter
duration of elapsed time. As long as there are no other bottlenecks,
such as waits for physical I/O, parallel plans generally should use 100%
of the CPU across all of the processors.

One key factor that led to choosing a parallel plan, namely how idle
the system is, can change after the query starts executing. For
example, if a query comes
in during an idle time, the server may choose to run with a parallel
plan and use a DOP of four and spawn up threads on four different
processors. Once those threads start executing, existing connections may
submit other queries that also require a lot of CPU. At that point, all
the different threads will share short time slices of the available CPU,
resulting in higher query duration.

Running with a parallel plan is not inherently bad and should provide
the fastest response time for that query. However, the response time for
a given query must be weighed against the overall throughput and
responsiveness of the rest of the queries on the system. Parallel
queries are generally best suited to batch processing and decision
support workloads and might not be desirable in a transaction processing
environment.


Intra-query parallelism problems can be detected using the following
methods.

System Monitor (Perfmon)

Look at the SQL Server:SQL Statistics – Batch Requests/sec counter
and see “SQL Statistics Object” in SQL Server Books Online for more
information.

Because a query must have an estimated cost that exceeds the cost
threshold for the parallelism configuration setting (which defaults to
5) before it is considered for a parallel plan, the more batches a
server is processing per second the less likely it is that the batches
are running with parallel plans. Servers that are running many parallel
queries normally have small batch requests per second (for example,
values less than 100).


DMVs

From a running server, you can determine whether any active requests are
running in parallel for a given session by using the following query.

select 
    r.session_id, 
    r.request_id, 
    max(isnull(exec_context_id, 0)) as number_of_workers, 
    r.sql_handle, 
    r.plan_handle, 
    r.statement_start_offset, 
    r.statement_end_offset 
from 
    sys.dm_exec_requests r 
    join sys.dm_os_tasks t on r.session_id = t.session_id 
    join sys.dm_exec_sessions s on r.session_id = s.session_id 
where 
    s.is_user_process = 0x1 
group by  
    r.session_id, r.request_id,  
    r.sql_handle, r.plan_handle,  
    r.statement_start_offset, r.statement_end_offset 
having max(isnull(exec_context_id, 0)) > 0

With this information, the text of the query can easily be retrieved by
using sys.dm_exec_sql_text, while the plan can be retrieved
using sys.dm_exec_query_plan.

You may also search for plans that are eligible to run in parallel. This
can be done by searching the cached plans to see if a relational
operator has its Parallel attribute as a nonzero value. These plans
may not run in parallel, but they are eligible to do so if the system is
not too busy.

-- Find query plans that may run in parallel 
select 
    p.*, 
    q.*, 
    cp.plan_handle 
from 
    sys.dm_exec_cached_plans cp 
    cross apply sys.dm_exec_query_plan(cp.plan_handle) p 
    cross apply sys.dm_exec_sql_text(cp.plan_handle) as q 
where 
    cp.cacheobjtype = 'Compiled Plan' and 
    p.query_plan.value('declare namespace  
        p="http://schemas.microsoft.com/sqlserver/2004/07/showplan"; 
        max(//p:RelOp/@Parallel)', 'float') > 0

In general, the duration of a query is longer than the amount of CPU
time, because some of the time was spent waiting on resources such as a
lock or physical I/O. The only scenario where a query can use more CPU
time than the elapsed duration is when the query runs with a parallel
plan such that multiple threads are concurrently using CPU. Note that
not all parallel queries will demonstrate this behavior (CPU time
greater than the duration).

Note: Some parts of the following code snippets have been displayed in
multiple lines only for better readability. These should be entered in a
single line.

select 
    qs.sql_handle, 
    qs.statement_start_offset, 
    qs.statement_end_offset, 
    qs.plan_handle, 
    qs.total_worker_time, 
    qs.total_elapsed_time 
from 
    sys.dm_exec_query_stats qs 
    cross apply sys.dm_exec_sql_text(qs.plan_handle) as q 
where 
    qs.total_worker_time > qs.total_elapsed_time

SQL Trace

Look for the following signs of parallel queries, which could be either
statements or batches that have CPU time greater than the duration.

select EventClass, TextData 
from fn_trace_gettable('<trace file path>', default) 
where 
    EventClass in (10, 12)    -- RPC:Completed, SQL:BatchCompleted 
    and CPU > Duration/1000   -- CPU is in milliseconds, Duration in microseconds

Or they can be Showplans (un-encoded) that have Parallelism operators in
them.

select EventClass, TextData 
from fn_trace_gettable('<trace file path>', default) 
where 
    TextData LIKE '%Parallelism%'

Any query that runs with a parallel plan is one that the optimizer
believes is expensive enough that it would exceed the cost threshold of
parallelism, which defaults to five (roughly 5-second execution time on
a reference machine). Any queries identified through the methods above
are candidates for further tuning.

  • Use the Database Engine Tuning Advisor to see if any indexing
    changes, changes to indexed views, or partitioning changes could
    reduce the cost of the query.

  • Check for significant differences in the actual versus the estimated
    cardinality since the cardinality estimates are the primary factor
    in estimating the cost of the query. If any significant differences
    are found:

    If the auto create statistics database option is disabled, make
    sure that there are no MISSING STATS entries in
    the Warnings column of the Showplan output.

    Try running UPDATE STATISTICS on the tables where the cardinality
    estimates are off.

    Verify that the query doesn’t use a query construct that the
    optimizer can’t accurately estimate, such as multi-statement
    table-valued functions or CLR functions, table variables, or
    comparisons with a Transact-SQL variable (comparisons with a
    parameter are OK).

  • Evaluate whether the query could be written in a more efficient
    fashion using different Transact-SQL statements or expressions.

Poor cursor usage

Versions of SQL Server prior to SQL Server 2005 only supported a single
active command per connection. A query that was executing or had results
pending to send to the client was considered active. In some situations,
the client application might need to read through the results and submit
other queries to SQL Server based on the row just read from the result
set. This could not be done with a default result set, since it could
have other pending results. A common solution was to change the
connection properties to use a server-side cursor.

When using a server-side cursor, the database client software (the OLE
DB provider or ODBC driver) transparently encapsulates client requests
inside of special extended stored procedures, such
as sp_cursoropensp_cursorfetch, and so forth. This is
referred to as an API cursor (as opposed to a TSQL cursor). When the
user executes the query, the query text is sent to the server
via sp_cursoropen; each subsequent request to read from the result set
results in an sp_cursorfetch call instructing the server to send back
only a certain number of rows. By controlling the number of rows that
are fetched, it is possible for the ODBC driver or OLE DB provider to
cache the row(s). This prevents a situation where the server is waiting
for the client to read all the rows it has sent. Thus, the server is
ready to accept a new request on that connection.

Applications that open cursors and fetch one row (or a small number of
rows) at a time can easily become bottlenecked by the network latency,
especially on a wide area network (WAN). On a fast network with many
different user connections, the overhead required to process many cursor
requests may become significant. Because of the overhead associated with
repositioning the cursor to the appropriate location in the result set,
per-request processing overhead, and similar processing, it is more
efficient for the server to process a single request that returns
100 rows than to process 100 separate requests which return the same
100 rows but one row at a time.


You can use the following methods to troubleshoot poor cursor usage.

System Monitor (Perfmon)

By looking at the SQL Server:Cursor Manager By Type – Cursor
Requests/Sec counter, you can get a general feel for how many cursor
requests are being issued on the system.
Systems that have high CPU utilization because of small fetch sizes
typically have hundreds of cursor requests per second. There are no
specific counters to tell you about the fetch buffer size.
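
If System Monitor is not available, the same counters can also be read from inside the server through the sys.dm_os_performance_counters DMV; a minimal sketch:

```sql
-- Snapshot of the cursor-related performance counters exposed by SQL Server.
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Cursor Manager by Type%'
ORDER BY object_name, counter_name;
```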


The following query can be used to determine the connections with API
cursors (as opposed to TSQL cursors) that are using a fetch buffer size
of one row. It is much more efficient to use a larger fetch buffer, such
as 100 rows.

Note: Some parts of the code snippet presented in the following
table have been displayed in multiple lines only for better readability.
These should be entered in a single line.

    select cur.*
    from sys.dm_exec_connections con
    cross apply sys.dm_exec_cursors(con.session_id) as cur
    where cur.fetch_buffer_size = 1
    and cur.properties LIKE 'API%'  -- API cursor (TSQL cursors always have a fetch buffer of 1)

SQL Trace

Use a trace that includes the RPC:Completed event class and search
for sp_cursorfetch statements. The value of the fourth parameter is
the number of rows returned by the fetch. The maximum number of rows
that are requested to be returned is specified as an input parameter in
the corresponding RPC:Starting event class.

Recommendations

  • Determine if cursors are the most appropriate means to accomplish
    the processing or whether a set-based operation, which is generally
    more efficient, is possible.

  • Consider enabling multiple active results (MARS) when connecting to
    SQL Server 2005.

  • Consult the appropriate documentation for your specific API to
    determine how to specify a larger fetch buffer size for the cursor:


    ODBC – SQL_ATTR_ROW_ARRAY_SIZE statement attribute

    OLE DB – IRowset::GetNextRows or IRowsetLocate::GetRowsAt
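
As a sketch of the first recommendation, the row-by-row cursor pattern below (over a hypothetical dbo.Orders table) can usually be replaced by a single set-based statement:

```sql
-- Illustrative only: table and column names are hypothetical.
-- Row-by-row processing with a cursor:
DECLARE @OrderId int;
DECLARE c CURSOR FAST_FORWARD FOR
    SELECT OrderId FROM dbo.Orders WHERE Status = 'New';
OPEN c;
FETCH NEXT FROM c INTO @OrderId;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET Status = 'Queued' WHERE OrderId = @OrderId;
    FETCH NEXT FROM c INTO @OrderId;
END
CLOSE c;
DEALLOCATE c;

-- The same work as one set-based operation, with a single round trip:
UPDATE dbo.Orders SET Status = 'Queued' WHERE Status = 'New';
```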


Memory Bottlenecks

This section specifically addresses low memory conditions and ways to
diagnose them as well as different memory errors, possible reasons for
them, and ways to troubleshoot.


It is quite common to refer to different memory resources by using the
single generic term memory. As there are several types of memory
resources, it is important to understand and differentiate which
particular memory resource is referred to.

Virtual address space and physical memory

In Microsoft Windows®, each process has its own virtual address space
(VAS). The set of all virtual addresses available for process use
constitutes the size of the VAS. The size of the VAS depends on the
architecture (32- or 64-bit) and the operating system. In the context of
troubleshooting, it is important to understand that virtual address
space is a consumable memory resource and an application can run out of
it even on a 64-bit platform while physical memory may still be
available.

For more information about virtual address space, see “Process Address
Space” in SQL Server Books Online and the Virtual Address Space topic
on MSDN.

Address Windowing Extensions (AWE) and SQL Server

Address Windowing Extensions (AWE) is an API that allows a 32-bit
application to manipulate physical memory beyond the inherent 32-bit
address limit. The AWE mechanism is technically not necessary on a
64-bit platform. It is, however, present there. Memory pages that are
allocated through the AWE mechanism are referred to as locked pages on
the 64-bit platform.

On both 32- and 64-bit platforms, memory that is allocated through the
AWE mechanism cannot be paged out. This can be beneficial to the
application. (This is one of the reasons for using the AWE mechanism on
a 64-bit platform.) It also affects the amount of RAM that is available
to the system and to other applications, which may have detrimental
effects. For this reason, in order to use AWE, the Lock Pages in
Memory privilege must be enabled for the account that runs SQL Server.

From a troubleshooting perspective, an important point is that the
SQL Server buffer pool uses AWE mapped memory; however, only database
(hashed) pages can take full advantage of memory allocated through AWE.
Memory allocated through the AWE mechanism is not reported by Task
Manager or in the Process: Private Bytes performance counter. You
need to use SQL Server specific counters or Dynamic Management Views to
obtain this information.
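
One way to see such allocations from inside the server is the sys.dm_os_memory_clerks DMV, which reports AWE allocations per memory clerk; a minimal sketch:

```sql
-- Memory allocated through AWE, aggregated by memory clerk type.
-- On systems not using AWE, awe_allocated_kb is 0 for every clerk.
SELECT type, SUM(awe_allocated_kb) AS awe_allocated_kb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY SUM(awe_allocated_kb) DESC;
```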

For more information about AWE mapped memory, see the “Managing memory
for large databases” and “Memory Architecture” topics in SQL Server
Books Online and the Large Memory Support topic on MSDN.

The following table summarizes the maximum memory support options for
different configurations of SQL Server 2005. (Note that a particular
edition of SQL Server or Windows may put more restrictive limits on the
amount of supported memory.)

Table 1

Configuration                 VAS          Max physical memory   AWE/locked pages support
Native 32-bit on 32-bit OS    2 GB         64 GB                 Yes
  with /3GB boot parameter1   3 GB         16 GB                 Yes
32-bit on x64 OS (WOW)        4 GB         64 GB                 Yes
32-bit on IA64 OS (WOW)       2 GB         2 GB                  No
Native 64-bit on x64 OS       8 terabyte   1 terabyte            Yes
Native 64-bit on IA64 OS      7 terabyte   1 terabyte            Yes


Memory pressures

Memory pressure denotes a condition when a limited amount of memory is
available. Identifying when SQL Server runs under memory pressure will
help you troubleshoot memory-related issues. SQL Server responds
differently depending on the type of memory pressure that is present.
The following table summarizes the types of memory pressure and their
general underlying causes. In all cases, you are more likely to see
timeout or explicit out-of-memory error messages.

Table 2

External physical memory pressure

  Physical memory (RAM) running low. This causes the system to trim
  working sets of currently running processes, which may result in
  overall slowdown. SQL Server detects this condition and, depending on
  the configuration, may reduce the commit target of the buffer pool and
  start clearing internal caches.

Internal physical memory pressure

  SQL Server detects high memory consumption internally, causing
  redistribution of memory between internal components.

  Internal memory pressure may be a result of:

  • Responding to the external memory pressure (SQL Server sets lower memory usage caps).

  • Changed memory settings (e.g. ‘max server memory’).

  • Changes in memory distribution of internal components (due to high percentage of reserved and stolen pages from the buffer pool).

External virtual memory pressure

  Running low on space in the system page file(s). This may cause the
  system to fail memory allocations, as it is unable to page out
  currently allocated memory. This condition may result in the whole
  system responding very slowly or even bring it to a halt.

Internal virtual memory pressure

  Running low on VAS due to fragmentation (a lot of VAS is available but
  in small blocks) and/or consumption (direct allocations, DLLs loaded
  in SQL Server VAS, high number of threads). SQL Server detects this
  condition and may release reserved regions of VAS, reduce buffer pool
  commit target, and start shrinking caches.

Windows has a notification mechanism2 that signals when physical memory
is running high or low. SQL Server uses this mechanism in its memory
management decisions.

General troubleshooting steps in each case are explained in Table 3.

Table 3

External physical memory pressure

  • Find major system memory consumers.

  • Attempt to eliminate (if possible).

  • Check for adequate system RAM and consider adding more RAM (usually requires more careful investigation beyond the scope of this paper).

Internal physical memory pressure

  • Identify major memory consumers inside SQL Server.

  • Verify server configuration.

  • Further actions depend on the investigation: check for workload; possible design issues; other resource bottlenecks.

External virtual memory pressure

  • Increase swap file size.

  • Check for major physical memory consumers and follow steps of external physical memory pressure.

Internal virtual memory pressure

  • Follow steps of internal physical memory pressure.


The following tools and sources of information could be used for
troubleshooting memory-related issues:

  • Memory related DMVs

  • DBCC MEMORYSTATUS command

  • Performance counters: performance monitor or DMV for SQL Server
    specific object

  • Task Manager

  • Event viewer: application log, system log
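
For instance, the SQL Server memory manager counters can be read either from System Monitor or directly through a DMV; a minimal sketch:

```sql
-- Key SQL Server:Memory Manager counters, read via DMV instead of Perfmon.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)');
```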

Detecting memory pressures

Memory pressure by itself does not indicate a problem. Memory pressure
is a necessary but not a sufficient condition for the server to
encounter memory errors later on. Working under memory pressure could be
a normal operating condition of the server. However, signs of memory
pressure may indicate that the server runs close to its capacity and the
potential for out-of-memory errors exists. In the case of a normally
operating server, this information can serve as a baseline for
determining the reasons for out-of-memory conditions later.
