Tuesday, September 25, 2012

Avoid TempDB contention

A great discussion on the MSDN blogs that any DBA who feels uncertain about tempdb configuration should read. It summarizes what should be done and dispels some myths about worker threads and writes; it is well worth reading:

http://blogs.msdn.com/b/psssql/archive/2007/02/21/sql-server-urban-legends-discussed.aspx

SQL Server Uses One Thread Per Data File


The Legend grew from the following SQL Server behavior -   “When SQL Server is creating a database file (.mdf, .ndf, .ldf) it writes zero’s to all bytes in the file and stamps the appropriate allocation structures.  SQL Server can use a thread per file when creating a database.”   This is a true statement but leaves room for interpretation so allow me to clarify with the following examples.
The true behavior is that for each UNIQUE disk drive specified a thread is used to initialize the database files.  The number of threads is capped by a reasonable pool limit around one half of the total worker thread limit.
create database …. name = c:\MyFile1.mdf, name = c:\MyFile2.ndf
The SQL Server would use a single worker to create this database initializing MyFile1.mdf and then MyFile2.ndf.
create database …. name = c:\MyFile1.mdf, name = D:\MyFile2.ndf
The SQL Server would use two workers to create this database, one thread initializing MyFile1.mdf and the other MyFile2.ndf, because different drives have been specified.   SQL Server makes the assumption that different drives have the possibility of unique I/O paths.
This information has been taken out of context and propagated incorrectly as SQL Server has a thread per database file so adding more files to a database can increase I/O performance.   Adding more files and properly aligning them with storage can increase the I/O performance for a variety of reasons but achieving new I/O threads per file is NOT one of them.  SQL Server I/O is all about I/O response time.  You can have a variety of I/O configurations as long as the response times are acceptable.
The SQL Server 2005 Instant File Initialization feature significantly reduces the impact of database creation because zeros don't have to be stamped in all bytes of a database file, only the log files.   This reduces the gain from using multiple threads during database creation.
SQL Server 2005 and Newer
Actions such as create and drop database assign a work unit number based on the volume identity.   This means each file specified in the create/drop database statement gets assigned a work unit number.  
The APIs GetVolumePathName and GetVolumeNameForMountPoint are used to obtain the identity of the volume.  Only when these APIs return an error do we revert to the drive identifier behavior, making the assumption that different drive letters are associated with unique volumes.  A background worker is created to handle each unique work unit.            
In the following illustration, drives C and D are on Volume A and drive E is on Volume B.   C and D will be assigned to Work Unit 1 and processed by Worker 1, while E will be assigned to Work Unit 2 and processed by Worker 2.
[Illustration: drives C and D map to Volume A (Work Unit 1 / Worker 1); drive E maps to Volume B (Work Unit 2 / Worker 2).]
The Truth


SQL Server uses asynchronous I/O, allowing any worker to issue I/O requests regardless of the number and size of the database files or what scheduler is involved.  In fact, some may remember that SQL Server 6.5 and previous versions had the sp_configure option 'max async I/O' that was commonly set to 255 and was removed when SQL Server 7.0 released.
SQL Server controls the async I/O depth with a set of FCBIoRequest structures.   These structures hold the I/O request information, including the OVERLAPPED structure.   There is a single pool of these structures for the entire instance of SQL Server.   For SQL Server 6.5 the maximum number of these structures was controlled with the configuration option.  
Starting with SQL Server 7.0 the number of these structures is only limited by available memory.  It is actually a bit more complicated than just whether a new FCBIoRequest can be allocated.   SQL Server has basic throttles for lazy writer, checkpoint, read ahead and other activities.  This means that the practical limit is ((# of workers) * (action practical limit)).   For example, the practical limit for checkpoint is generically no more than 100 outstanding I/O requests at any point in time, and less if the target response time is not being honored.
Tempdb


The immediate question I get is – “What about all the KB articles on tempdb, a file per scheduler and such?”    This is again a misinterpretation of the legend, as tempdb tuning is all about the allocation activity, not a number of threads per file (no such thing).  Workers use tempdb like any other database; any worker can issue I/O to and from tempdb as needed.
Tempdb is the database with the highest level of create and drop actions and under high stress the allocation pages, syscolumns and sysobjects can become bottlenecks.   SQL Server 2005 reduces contention with the ‘cached temp table’ feature and allocation contention skip ahead actions.
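Allocation contention of this kind shows up as PAGELATCH (not PAGEIOLATCH) waits on tempdb's allocation pages. A minimal sketch to spot it in real time, assuming the conventional database id 2 for tempdb:

```sql
-- Sessions waiting on latches for tempdb pages (database_id 2).
-- Resource descriptions such as 2:1:1 (PFS), 2:1:2 (GAM), 2:1:3 (SGAM)
-- point to allocation-page contention rather than an I/O problem.
SELECT session_id,
       wait_type,
       wait_duration_ms,
       resource_description
FROM   sys.dm_os_waiting_tasks
WHERE  wait_type LIKE 'PAGELATCH%'
       AND resource_description LIKE '2:%'
```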
When multiple workers are attempting to create or drop objects in tempdb you can decrease the bottleneck by having
·         As many files in tempdb as schedulers
·         All file sizes equal
·         Uniform allocations enabled (-T1118)
By configuring in this way each new allocation occurs in round-robin fashion across the files and only on uniform extents.   SQL Server only allows a single worker to be active on each scheduler at any point in time so if every ‘running’ worker is attempting tempdb allocations they will logically partition to a separate file in tempdb, reducing the allocation contentions.
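A minimal sketch of that configuration, assuming four schedulers and hypothetical file names and paths; sizes must be tailored to the workload, and -T1118 is normally set as a startup parameter rather than via DBCC:

```sql
-- Equally sized tempdb data files, one per scheduler (example: 4 CPUs).
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev,  SIZE = 1024MB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 1024MB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 1024MB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'T:\tempdb4.ndf', SIZE = 1024MB, FILEGROWTH = 0);
-- Uniform extent allocations; usually specified as -T1118 at instance startup.
DBCC TRACEON (1118, -1);
```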
I/O Affinity


I/O affinity is the ability to dedicate a scheduler to the processing of I/O requests.   As you can imagine, this muddied the waters and added to the legend.
I/O affinity does not understand database or database specifics.  It is only a queuing mechanism for issuing and completing I/O.   A worker determines it needs to perform an I/O and does all the basic work.  The I/O affinity hook is just before the worker issues the actual ReadFile or WriteFile call.  Instead of issuing the I/O the worker puts the I/O request on a queue.   The queue is serviced by the dedicated scheduler instead.
A Disk Queue Length greater than 2 indicates an I/O bottleneck


A statement was published many years ago that sustained disk queue length greater than 2 is an indication of an I/O bottleneck.  This statement is still true if the application is not designed to handle the situation.   SQL Server is designed to push disk queue lengths above 2 when it is appropriate.
SQL Server uses async I/O to help maximize resource usage.   SQL Server understands that it can hand off an I/O request to the I/O subsystem and continue with other activity.   Let’s look at an example of this.
SQL Server checkpoint posts up to 100 I/O requests and monitors the I/O response time in order to properly throttle checkpoint impact.  When the I/O response time exceeds the target the number of I/Os is throttled.    The disk queue length can easily exceed 2 and not be an indication of a subsystem problem.  SQL Server is attempting to maximize the I/O channel.
Consider read-ahead or tempdb spooling actions.  The workers seldom directly wait for the I/O to complete. Instead the workers continue on with other activities, taking advantage of CPU processing power instead of waiting for the I/O response.   You would not want SQL Server to read a page, add it to a sort, and write the sort page during a large sort.   Instead SQL Server will put the sort I/O in motion and return to the next read page and continue sorting.   The sort operations track the outstanding I/Os and check completion status at later intervals.
The next time someone looks at the disk counters on the system and states the disk queue length is above 2 and this is a problem take a minute to look at other data points.   The key for SQL Server is the I/O response times.  Start by looking at the Average Disk Seconds Per Transfer for the same I/O path.  Then look at the SQL Server sysprocesses and waitstats for buffer based I/O wait information.   You want to see evidence from SQL Server that the disk queue length is related to an I/O bottleneck.  Don’t just take the depth of 2 at face value.
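As a starting point for that corroboration, a sketch of the buffer I/O wait check: high average PAGEIOLATCH waits alongside the deep queue are the real evidence of an I/O bottleneck.

```sql
-- Buffer I/O latch waits: the average wait per waiting task is the number to watch.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       CASE WHEN waiting_tasks_count = 0 THEN 0
            ELSE wait_time_ms / waiting_tasks_count END AS avg_wait_ms
FROM   sys.dm_os_wait_stats
WHERE  wait_type LIKE 'PAGEIOLATCH%'
ORDER  BY wait_time_ms DESC
```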
-          Bob Dorr



Tuesday, November 1, 2011

Denali New Features



5. New Development Console

One obvious improvement is the new development environment, code-named “Juneau,” demonstrated at Quentin Clark’s PASS keynote address. Juneau uses the new Windows Presentation Foundation (WPF)-based Visual Studio shell that Visual Studio 2010 is built on. One goal for the new feature is to make the development environment consistent for both SQL Azure and on-premises SQL Server. It’s not in CTP1, but you can preview it at the Live Keynote site at PASS.

4. Columnar Index

One of the more surprising features in the Denali release is the new columnar index, also called the column-based query accelerator. I can’t help but wonder if this feature isn’t a reaction to the NoSQL movement.

The columnar index uses the same high performance/high compression technology that Microsoft uses in PowerPivot and brings it into the database engine. Indexed data is stored column-wise and only the needed columns are returned as query results for columnar indexes. Microsoft says this technology can provide up to a 10X improvement in query performance with reduced I/O—depending on your data.
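As a sketch of the Denali syntax (the table and column names here are hypothetical), a columnar index is created much like an ordinary nonclustered index:

```sql
-- Stores the listed fact-table columns column-wise for scan-heavy queries.
CREATE NONCLUSTERED COLUMNSTORE INDEX ix_FactSales_cs
ON dbo.FactSales (OrderDateKey, ProductKey, SalesAmount);
```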

3. Contained Databases

Contained databases make it easy to move databases between different instances of SQL Server. Users of a contained database don’t need logins for the SQL Server instance. Instead, the contained database handles all authentications.

Contained databases hosted on an instance of SQL Server have no configuration dependencies on that instance and are very similar in concept to the virtual databases that I discussed in “Riding the Virtualization Express: From Desktops to Databases” at InstantDoc 125267. See the MSDN website for more information.
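A minimal sketch, assuming the Denali CTP syntax and a hypothetical database name; the instance must first be configured to allow contained database authentication:

```sql
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;

CREATE DATABASE MyAppDb CONTAINMENT = PARTIAL;

-- A contained user is authenticated by the database itself, not by an instance login.
USE MyAppDb;
CREATE USER AppUser WITH PASSWORD = 'Str0ng!Passw0rd';
```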

2. Support for Windows Server Core

Windows Server Core is designed for infrastructure applications like SQL Server that provide back-end services but don’t need a graphical UI on the same server. The inability to run previous versions of SQL Server on Windows Server Core always seemed wrong. Denali’s support for Windows Server Core will enable leaner and more efficient SQL Server installations, while also reducing the attack surface and the need for patching.

1. AlwaysOn

Probably the most significant new feature in SQL Server Denali is the SQL Server AlwaysOn feature, which is essentially the name for the next evolution of database mirroring. Database mirroring is very cool but had limitations: It was limited to a single mirrored partner, and the data replica contained on the mirrored site couldn’t be used while mirroring was active.

AlwaysOn, or HADR (High Availability Disaster Recovery) as it’s also called, supports up to four replicas and the data in the mirrored replicas can be queried and backups performed. At this early stage, HADR seems more complicated to set up than database mirroring, but the advantages look to make it worth the extra effort. Read more about HADR at the MSDN site.
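As a rough sketch of the setup (server and endpoint names here are hypothetical, and the syntax shown is the early CTP form, so treat it as illustrative only):

```sql
-- An availability group with a readable secondary replica.
CREATE AVAILABILITY GROUP AgSales
FOR DATABASE SalesDb
REPLICA ON
    'NODE1' WITH (ENDPOINT_URL = 'TCP://node1.contoso.com:5022',
                  AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                  FAILOVER_MODE = AUTOMATIC),
    'NODE2' WITH (ENDPOINT_URL = 'TCP://node2.contoso.com:5022',
                  AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                  FAILOVER_MODE = AUTOMATIC,
                  SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));
```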

The Microsoft® SQL Server® code name 'Denali'



The Microsoft® SQL Server® code name 'Denali' Community Technology Preview 3 (CTP 3) Feature Pack is a collection of stand-alone packages which provide additional value for Microsoft® SQL Server® code name 'Denali' CTP 3. It includes the latest versions of:

Tools and components for Microsoft® SQL Server® code name 'Denali' CTP 3.
Add-on providers for Microsoft® SQL Server® code name 'Denali' CTP 3.

Some components in the Microsoft® SQL Server® code name 'Denali' CTP 3 Feature Pack have certain Internet-enabled features, including the Customer Experience Improvement Program and Error and Usage Reporting component, enabled by default to help us identify and address issues that you may encounter while using SQL Server.


Some components in the Microsoft® SQL Server® code name 'Denali' CTP 3 Feature Pack offer two different methods to complete the installation process:



Attended Installation – an interactive user interface guides you through the set-up process and obtains all of the information and consent required to complete the installation. This includes the display, acknowledgement, and archiving of the applicable SQL Server license terms.



Unattended Installation – a command-line set-up process with no interactive user interface to guide you through the set-up. In this case, you will be required to supply a parameter during installation that indicates your acceptance of the license terms.


http://www.microsoft.com/download/en/details.aspx?id=27069

Monday, October 31, 2011

Find Unused Indexes




SELECT o.name,
       indexname = i.name,
       i.index_id,
       reads = user_seeks + user_scans + user_lookups,
       writes = user_updates,
       ROWS = (SELECT SUM(p.ROWS)
               FROM   sys.partitions p
               WHERE  p.index_id = s.index_id
                      AND s.object_id = p.object_id),
       CASE
         WHEN s.user_updates < 1 THEN 100
         ELSE 1.00 * ( s.user_seeks + s.user_scans + s.user_lookups ) / s.user_updates
       END                        AS reads_per_write,
       'DROP INDEX ' + Quotename(i.name) + ' ON ' + Quotename(c.name) + '.'
       + Quotename(Object_name(s.object_id)) AS 'drop statement'
FROM   sys.dm_db_index_usage_stats s
       INNER JOIN sys.indexes i
         ON i.index_id = s.index_id
            AND s.object_id = i.object_id
       INNER JOIN sys.objects o
         ON s.object_id = o.object_id
       INNER JOIN sys.schemas c
         ON o.schema_id = c.schema_id
WHERE  Objectproperty(s.object_id, 'IsUserTable') = 1
       AND s.database_id = Db_id()
       AND i.type_desc = 'nonclustered'
       AND i.is_primary_key = 0
       AND i.is_unique_constraint = 0
       AND (SELECT SUM(p.ROWS)
            FROM   sys.partitions p
            WHERE  p.index_id = s.index_id
                   AND s.object_id = p.object_id) > 10000
ORDER  BY reads

Sunday, October 30, 2011

Find Heaps (Tables Without a Clustered Index)




SELECT Schema_name(o.schema_id) AS [schema],
       Object_name(i.object_id) AS [table],
       p.ROWS,
       user_seeks,
       user_scans,
       user_lookups,
       user_updates,
       last_user_seek,
       last_user_scan,
       last_user_lookup
FROM   sys.indexes i
       INNER JOIN sys.objects o
         ON i.object_id = o.object_id
       INNER JOIN sys.partitions p
         ON i.object_id = p.object_id
            AND i.index_id = p.index_id
       LEFT OUTER JOIN sys.dm_db_index_usage_stats ius
         ON i.object_id = ius.object_id
            AND i.index_id = ius.index_id
WHERE  i.type_desc = 'HEAP'
ORDER  BY ROWS DESC


Thursday, October 27, 2011

Generate INSERT statements for tables


Generate INSERT statements for tables – a very handy script you will want to use.

 
CREATE PROCEDURE proc_CreateInsertForTable
  @tableName nvarchar(1000),
  @withDescription bit = 1
AS 
/*************************************************************************** 
  Written by Jennifer McCown, 9/13/2009 
  Jen@JenniferMcCown.com 
  http://www.MidnightDBA.com 
  Not copyrighted; use, modify, and distribute freely! 
  
  Description: Generate an insert statement for a given table.  
  @withDescription controls the appearance of the "VALUES" portion of the 
  generated insert statement.
   
  Please note that the insert statement will not include Identity columns. 

  EXEC proc_CreateInsertForTable @tableName = 'Video', @withDescription = 1
****************************************************************************/

SET NOCOUNT ON

DECLARE @sql1 varchar(8000), 
    @sql2 varchar(4000),
    @name varchar(500), 
    @type varchar(100)
    
DECLARE @mytable table (column_id int, 
        [name] nvarchar(500), 
        system_type_id int, 
        max_length int, 
        [precision] int, 
        scale int, 
        is_nullable bit, 
        is_identity bit, 
        readableType  varchar(100))

INSERT INTO @myTable
SELECT column_id, 
    c.[name], 
    c.system_type_id, 
    c.max_length, 
    c.[precision], 
    c.scale, 
    c.is_nullable, 
    c.is_identity, 
    t.[name] readableType
from sys.columns c
JOIN sys.types t on c.user_type_id = t.user_type_id -- user_type_id avoids duplicate rows for aliased types
where object_name(c.object_id) = @tableName order by c.column_id

-- Update ReadableType columns
UPDATE @myTable SET readableType = readableType 
    + ' (' + CAST(max_length as nvarchar(10)) + ')'
WHERE readableType IN ('char', 'varchar', 'nchar', 'nvarchar')

UPDATE @myTable SET readableType = readableType 
    + ' (' + CAST([precision] as nvarchar(10))  
    + ', ' + CAST([scale] as nvarchar(10))  + ')'
WHERE readableType IN ('decimal','numeric')

UPDATE @myTable SET readableType = readableType 
    + ' (' + CAST([precision] as nvarchar(10))  + ')'
WHERE readableType = 'float'


-- Seed the SQL variables with INSERT syntax
SET @sql1 = 'INSERT INTO ' + @tableName + ' (' 
SET @sql2 = ') ' + 'VALUES (' 


-- Open a cursor to add all the columns to INSERT statement
DECLARE insertCrs CURSOR FOR
SELECT [name], readableType from @myTable WHERE is_identity = 0

OPEN insertCrs 

-- Get first column
FETCH NEXT FROM insertCrs INTO @name, @type
SET @sql1 = @sql1 + @name
IF @withDescription = 1 
  SET @sql2 = @sql2 + ''''' /** ' + @name + ' ' + @type + ' **/ '
ELSE 
  SET @sql2 = @sql2 + ''''' '

-- Add additional columns
FETCH NEXT FROM insertCrs INTO @name, @type

  IF @withDescription = 1 
    WHILE @@FETCH_STATUS = 0
    BEGIN
      SET @sql1 = @sql1 + ', ' + @name
      SET @sql2 = @sql2 + ', '''' /** ' + @name + '**/ '
      FETCH NEXT FROM insertCrs INTO @name, @type
    END
  
  ELSE

    WHILE @@FETCH_STATUS = 0
    BEGIN
      SET @sql1 = @sql1 + ', ' + @name
      SET @sql2 = @sql2 + ', '''' '
      FETCH NEXT FROM insertCrs INTO @name, @type
    END


CLOSE insertCrs
DEALLOCATE insertCrs

SET @sql2 = @sql2 + ')'

SELECT @sql1, @sql2

SET NOCOUNT OFF
RETURN 0

The Most Important DMV Queries You Will Ever Use


-- check memory
SELECT (physical_memory_in_bytes / 1024.0) / 1024.0 AS physical_memory_in_mb
FROM sys.dm_os_sys_info

-- columns in the view:
--    cpu_count: number of logical CPUs on the server
--    hyperthread_ratio: ratio of logical to physical CPUs
--    physical_memory_in_bytes: amount of physical memory available
--    virtual_memory_in_bytes: amount of virtual memory available
--    bpool_committed: committed physical memory in the buffer pool
--    os_priority_class: priority class for the SQL Server process
--    max_workers_count: maximum number of workers that can be created

select * from sys.dm_os_memory_cache_clock_hands
where type in ('CACHESTORE_OBJCP','CACHESTORE_SQLCP','CACHESTORE_PHDR','CACHESTORE_XPROC')

--<<<<<<<<<<----------------------------------------------------------------->>>>>>>>>>--
--DMV_All-Stars.sql
    --Jimmy May 317.590.8650
    --A.C.E. Performance Team
    --jimmymay@microsoft.com
    --aspiringgeek@live.com
    --http://blogs.msdn.com/jimmymay
--Table of Contents
--1. expensive queries
--2. wait stats
--3. virtual file stats (& virtual file latency)
--4. plan cache interrogation
--5. real-time blockers
--<<<<<<<<<<----------------------------------------------------------------->>>>>>>>>>--
--Weasel Clause: This script is provided "AS IS" with no warranties, and confers no rights.
--  Use of included script samples are subject to the terms specified at
--  http://www.microsoft.com/info/cpyright.htm
--<<<<<<<<<<----------------------------------------------------------------->>>>>>>>>>--

--1. expensive queries
    --text *and* statement
    --usage: modify WHERE & ORDER BY clauses to suit circumstances
SELECT TOP 25
      -- the following four columns are NULL for ad hoc and prepared batches
      DB_Name(qp.dbid) as dbname , qp.dbid , qp.objectid , qp.number
    --, qp.query_plan --the query plan can be *very* useful; enable if desired
    , qt.text
    , SUBSTRING(qt.text, (qs.statement_start_offset/2) + 1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(qt.text)
            ELSE qs.statement_end_offset END
                - qs.statement_start_offset)/2) + 1) as statement_text
    , qs.creation_time , qs.last_execution_time , qs.execution_count
    , qs.total_worker_time    / qs.execution_count as avg_worker_time
    , qs.total_physical_reads / qs.execution_count as avg_physical_reads
    , qs.total_logical_reads  / qs.execution_count as avg_logical_reads
    , qs.total_logical_writes / qs.execution_count as avg_logical_writes
    , qs.total_elapsed_time   / qs.execution_count as avg_elapsed_time
    , qs.total_clr_time       / qs.execution_count as avg_clr_time
    , qs.total_worker_time , qs.last_worker_time , qs.min_worker_time , qs.max_worker_time
    , qs.total_physical_reads , qs.last_physical_reads , qs.min_physical_reads , qs.max_physical_reads
    , qs.total_logical_reads , qs.last_logical_reads , qs.min_logical_reads , qs.max_logical_reads
    , qs.total_logical_writes , qs.last_logical_writes , qs.min_logical_writes , qs.max_logical_writes
    , qs.total_elapsed_time , qs.last_elapsed_time , qs.min_elapsed_time , qs.max_elapsed_time
    , qs.total_clr_time , qs.last_clr_time , qs.min_clr_time , qs.max_clr_time
    --, qs.sql_handle , qs.statement_start_offset , qs.statement_end_offset
    , qs.plan_generation_num  -- , qp.encrypted
    FROM sys.dm_exec_query_stats as qs
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) as qp
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
    --WHERE...
    --ORDER BY qs.execution_count      DESC  --Frequency
      ORDER BY qs.total_worker_time    DESC  --CPU
    --ORDER BY qs.total_elapsed_time   DESC  --Duration
    --ORDER BY qs.total_logical_reads  DESC  --Reads
    --ORDER BY qs.total_logical_writes DESC  --Writes
    --ORDER BY qs.total_physical_reads DESC  --PhysicalReads
    --ORDER BY avg_worker_time         DESC  --AvgCPU
    --ORDER BY avg_elapsed_time        DESC  --AvgDuration
    --ORDER BY avg_logical_reads       DESC  --AvgReads
    --ORDER BY avg_logical_writes      DESC  --AvgWrites
    --ORDER BY avg_physical_reads      DESC  --AvgPhysicalReads

    --sample WHERE clauses
    --WHERE last_execution_time > '20070507 15:00'
    --WHERE execution_count = 1
    --  WHERE SUBSTRING(qt.text, (qs.statement_start_offset/2) + 1,
    --    ((CASE qs.statement_end_offset
    --        WHEN -1 THEN DATALENGTH(qt.text)
    --        ELSE qs.statement_end_offset END
    --            - qs.statement_start_offset)/2) + 1)
    --      LIKE '%MyText%'

--<<<<<<<<<<----------------------------------------------------------------->>>>>>>>>>--
--2. wait stats
--2000
--DBCC sqlperf(waitstats,CLEAR)               --re-initialize waitstats
DBCC sqlperf(waitstats)
--2005
--DBCC sqlperf('sys.dm_os_wait_stats',CLEAR)  --re-initialize waitstats
SELECT * , (wait_time_ms - signal_wait_time_ms) as resource_wait_time_ms
         , signal_wait_time_per_wait
             = CASE WHEN waiting_tasks_count = 0
                    THEN 0 ELSE (signal_wait_time_ms/waiting_tasks_count) END
         , resource_wait_time_per_wait
             = CASE WHEN waiting_tasks_count = 0
                    THEN 0 ELSE ((wait_time_ms - signal_wait_time_ms)/waiting_tasks_count) END
    FROM sys.dm_os_wait_stats
    ORDER BY resource_wait_time_ms DESC
    --ORDER BY wait_time_ms DESC
    --ORDER BY signal_wait_time_ms DESC
    --ORDER BY waiting_tasks_count DESC
    --ORDER BY max_wait_time_ms DESC

--<<<<<<<<<<----------------------------------------------------------------->>>>>>>>>>--
--3. virtual file stats
--2000
SELECT DB_NAME(DBID) as DbName
    , (IoStallMS/(NumberReads + NumberWrites)) as IoStall
    , * FROM ::fn_virtualfilestats(-1,-1)
--2005
SELECT
    --virtual file latency
    vReadLatency
        = CASE WHEN num_of_reads = 0
               THEN 0 ELSE (io_stall_read_ms/num_of_reads) END
  , vWriteLatency
       = CASE WHEN num_of_writes = 0
              THEN 0 ELSE (io_stall_write_ms/num_of_writes) END
  , vLatency
      = CASE WHEN (num_of_reads = 0 AND num_of_writes = 0)
             THEN 0 ELSE (io_stall/(num_of_reads + num_of_writes)) END
  --avg bytes per IOP
  , BytesperRead
        = CASE WHEN num_of_reads = 0
               THEN 0 ELSE (num_of_bytes_read/num_of_reads) END
  , BytesperWrite
       = CASE WHEN num_of_writes = 0
              THEN 0 ELSE (num_of_bytes_written/num_of_writes) END
  , BytesperTransfer
      = CASE WHEN (num_of_reads = 0 AND num_of_writes = 0)
             THEN 0 ELSE ((num_of_bytes_read+num_of_bytes_written)/(num_of_reads + num_of_writes)) END
  , LEFT(mf.physical_name,2) as Drive
  , DB_NAME(vfs.database_id) as DB
  --, mf.name AS FileName
  , vfs.*
  , mf.physical_name
  FROM sys.dm_io_virtual_file_stats(NULL,NULL) as vfs
    JOIN sys.master_files as mf ON vfs.database_id = mf.database_id AND vfs.file_id = mf.file_id
  --WHERE vfs.file_id = 2 -- log files
  WHERE DB_NAME(vfs.database_id) IN ('tpcc','tpcc2')  -- sample filter; adjust or remove
  ORDER BY vLatency DESC
  -- ORDER BY vReadLatency DESC
  -- ORDER BY vWriteLatency DESC

--<<<<<<<<<<----------------------------------------------------------------->>>>>>>>>>--
--4. plan cache interrogation
-- note: sys.dm_exec_cached_plans is a diminutive version of syscacheobjects
    -- no dbid, setopts
-- we want reusable code, absence of ad hoc SQL
-- we want relatively few rows with low usecounts
--2000
SELECT cacheobjtype , objtype , usecounts , pagesused , dbid , sql
    FROM master.dbo.syscacheobjects
    WHERE cacheobjtype = 'Compiled Plan'
    ORDER BY usecounts DESC
    --ORDER BY sql
--2005
SELECT c.cacheobjtype , c.objtype , c.usecounts , c.size_in_bytes , t.dbid , t.text
    FROM sys.dm_exec_cached_plans as c
        CROSS APPLY sys.dm_exec_sql_text(plan_handle) as t
    WHERE c.cacheobjtype = 'Compiled Plan'
    ORDER BY c.usecounts DESC
    --ORDER BY t.text

--<<<<<<<<<<----------------------------------------------------------------->>>>>>>>>>--
--5. real-time blockers
    --Report Blocker and Waiter SQL Statements
    --http://www.microsoft.com/technet/scriptcenter/scripts/sql/sql2005/trans/sql05vb044.mspx?mfr=true
    -- SQLCAT BPT
SELECT
    t1.resource_type as lock_type
  , db_name(resource_database_id) as DB
  , t1.resource_associated_entity_id as blkd_obj
  , t1.request_mode as lock_req           -- lock requested
  , t1.request_session_id as waiter_sid   -- spid of waiter
  , t2.wait_duration_ms as waittime
  , (SELECT text FROM sys.dm_exec_requests as r   -- get sql for waiter
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle)
            WHERE r.session_id = t1.request_session_id) as waiter_batch
  , (SELECT SUBSTRING(qt.text , r.statement_start_offset/2
          , (CASE WHEN r.statement_end_offset = -1
                THEN LEN(CONVERT(nvarchar(MAX), qt.text)) * 2
                ELSE r.statement_end_offset END - r.statement_start_offset)/2)
        FROM sys.dm_exec_requests as r
            CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) as qt
            WHERE r.session_id = t1.request_session_id) as waiter_stmt   -- the statement executing right now
  , t2.blocking_session_id as blocker_sid   -- spid of blocker
  , (SELECT text FROM sys.sysprocesses as p   -- get sql for blocker
        CROSS APPLY sys.dm_exec_sql_text(p.sql_handle)
        WHERE p.spid = t2.blocking_session_id) as blocker_stmt
FROM sys.dm_tran_locks as t1
    JOIN sys.dm_os_waiting_tasks as t2
        ON t1.lock_owner_address = t2.resource_address

-- Get Top 100 executed SPs ordered by execution count
SELECT TOP 100 qt.text AS 'SP Name', qs.execution_count AS 'Execution Count',
    qs.execution_count/DATEDIFF(Second, qs.creation_time, GetDate()) AS 'Calls/Second',
    qs.total_worker_time/qs.execution_count AS 'AvgWorkerTime',
    qs.total_worker_time AS 'TotalWorkerTime',
    qs.total_elapsed_time/qs.execution_count AS 'AvgElapsedTime',
    qs.max_logical_reads, qs.max_logical_writes, qs.total_physical_reads,
    DATEDIFF(Minute, qs.creation_time, GetDate()) AS 'Age in Cache'
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.dbid = db_id() -- Filter by current database
ORDER BY qs.execution_count DESC

-- Get Top 20 executed SPs ordered by total worker time (CPU pressure)
SELECT TOP 20 qt.text AS 'SP Name', qs.total_worker_time AS 'TotalWorkerTime',
    qs.total_worker_time/qs.execution_count AS 'AvgWorkerTime',
    qs.execution_count AS 'Execution Count',
    ISNULL(qs.execution_count/DATEDIFF(Second, qs.creation_time, GetDate()), 0) AS 'Calls/Second',
    ISNULL(qs.total_elapsed_time/qs.execution_count, 0) AS 'AvgElapsedTime',
    qs.max_logical_reads, qs.max_logical_writes,
    DATEDIFF(Minute, qs.creation_time, GetDate()) AS 'Age in Cache'
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.dbid = db_id() -- Filter by current database
ORDER BY qs.total_worker_time DESC

-- Get Top 20 executed SPs ordered by logical reads (memory pressure)
SELECT TOP 20 qt.text AS 'SP Name', qs.total_logical_reads,
    qs.execution_count AS 'Execution Count', qs.total_logical_reads/qs.execution_count AS 'AvgLogicalReads',
    qs.execution_count/DATEDIFF(Second, qs.creation_time, GetDate()) AS 'Calls/Second',
    qs.total_worker_time/qs.execution_count AS 'AvgWorkerTime',
    qs.total_worker_time AS 'TotalWorkerTime',
    qs.total_elapsed_time/qs.execution_count AS 'AvgElapsedTime',
    qs.total_logical_writes,
    qs.max_logical_reads, qs.max_logical_writes, qs.total_physical_reads,
    DATEDIFF(Minute, qs.creation_time, GetDate()) AS 'Age in Cache', qt.dbid
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.dbid = db_id() -- Filter by current database
ORDER BY qs.total_logical_reads DESC

-- Get Top 20 executed SPs ordered by physical reads (read I/O pressure)
SELECT TOP 20 qt.text AS 'SP Name', qs.total_physical_reads, qs.total_physical_reads/qs.execution_count AS 'Avg Physical Reads',
    qs.execution_count AS 'Execution Count',
    qs.execution_count/DATEDIFF(Second, qs.creation_time, GetDate()) AS 'Calls/Second',
    qs.total_worker_time/qs.execution_count AS 'AvgWorkerTime',
    qs.total_worker_time AS 'TotalWorkerTime',
    qs.total_elapsed_time/qs.execution_count AS 'AvgElapsedTime',
    qs.max_logical_reads, qs.max_logical_writes,
    DATEDIFF(Minute, qs.creation_time, GetDate()) AS 'Age in Cache', qt.dbid
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.dbid = db_id() -- Filter by current database
ORDER BY qs.total_physical_reads DESC

-- Get Top 20 executed SPs ordered by logical writes/minute
SELECT TOP 20 qt.text AS 'SP Name', qs.total_logical_writes, qs.total_logical_writes/qs.execution_count AS 'AvgLogicalWrites',
    qs.total_logical_writes/DATEDIFF(Minute, qs.creation_time, GetDate()) AS 'Logical Writes/Min',
    qs.execution_count AS 'Execution Count',
    qs.execution_count/DATEDIFF(Second, qs.creation_time, GetDate()) AS 'Calls/Second',
    qs.total_worker_time/qs.execution_count AS 'AvgWorkerTime',
    qs.total_worker_time AS 'TotalWorkerTime',
    qs.total_elapsed_time/qs.execution_count AS 'AvgElapsedTime',
    qs.max_logical_reads, qs.max_logical_writes, qs.total_physical_reads,
    DATEDIFF(Minute, qs.creation_time, GetDate()) AS 'Age in Cache',
    qs.total_physical_reads/qs.execution_count AS 'Avg Physical Reads', qt.dbid
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.dbid = db_id() -- Filter by current database
ORDER BY qs.total_logical_writes DESC

-- Get I/O statistics by database
SELECT db_name(m.database_id) as database_name,
    sum(fs.num_of_reads + fs.num_of_writes) as total_io,
    sum(fs.num_of_reads) as num_of_reads,
    sum(fs.num_of_bytes_read) as num_of_bytes_read,
    sum(fs.num_of_writes) as num_of_writes,
    sum(fs.num_of_bytes_written) as num_of_bytes_written
FROM sys.dm_io_virtual_file_stats(NULL, NULL) fs
    JOIN sys.master_files m on fs.database_id = m.database_id and fs.file_id = m.file_id
GROUP BY db_name(m.database_id)
ORDER BY sum(fs.num_of_reads + fs.num_of_writes) desc

--SQL Server: How do I identify the queries that are worth tuning?
SELECT st.text, qp.query_plan, cp.cacheobjtype, cp.objtype, cp.plan_handle
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp