WebsiteSpark

Wednesday 4 January 2012

Shrink database

Many a time you may have wondered how to reclaim the file or disk space allocated to data or log files that have grown beyond your capacity planning.
There are various reasons for log files to grow abruptly and without warning, and once a log file has grown, SQL Server does not automatically return the unused log space to Windows. Reclaiming it is a manual process, initiated by the SQL Server administrator.
Let's start by creating a new database to play with, and then look at the various options we have for shrinking the database and the database files.
We create the database and make sure it is in the simple recovery model, to avoid unnecessary log-space issues, because we are going to do enough inserts and deletes to make this database a big one and, more importantly, a fragmented one.

Use master;
go

/*
Create a database TestDB to play with.
*/

if DB_ID('TestDB') > 0
Drop database TestDB;
go
/*
--The same thing can be achieved with
if DATABASEPROPERTYEX('TestDB','version') > 0
drop database TestDB
*/

Create database TestDB;
go

/* Check for recovery model, if DB is in full recovery model,
then change to Simple, to reduce the unnecessary log overhead
*/
If DATABASEPROPERTYEX('TestDB','Recovery') = 'FULL'
Begin
      alter database TestDB set  recovery simple ;
End
go



Now that we have the database, let's create the table.

use TestDB;
go

/*Create table to play with*/
create table TestTbl (
Col1 varchar(250) primary key clustered,
col2 varchar(7750)
);
go



Remember that we are creating a clustered index on a varchar column.
We don't want SQL Server to return row counts, so we set NOCOUNT on, and then start inserting our first round of data in a WHILE loop, inside a transaction.

/* We do not want the row count messages displayed unnecessarily */
Set Nocount on;
go


/*
Insert huge amount of data in loop within a transaction
to increase the log & database file
*/

Begin tran;
go

Declare @i int;
set @i = 0;

While (@i < 200000)
Begin
      insert into
            TestTbl (col1,col2)
      Values
            ( replicate ('A',100) + convert( varchar(20),@i),   replicate('A', @i % 800));

      Set @i = @i + 1;
End;
go

Commit tran;
go



Let's check the log file. Here we use DBCC LOGINFO to inspect it, then issue a CHECKPOINT, and then see what happens to our log file.

dbcc loginfo;
go


checkpoint;
go

dbcc loginfo;
go


DBCC ShowContig;
Go


SELECT * FROM sys.dm_db_index_physical_stats (
    DB_ID ('TestDB'), OBJECT_ID ('TestTbl'), 1, NULL, 'DETAILED');
GO

The difference between the DBCC LOGINFO output before and after the checkpoint is that most of the VLFs that had an active status (status = 2) have been marked reusable and now show status 0; SQL Server has written the committed transaction information to the data file appropriately (how this happens is a topic for a separate discussion).
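
If you just want a quick, summarized view of how full each database's log is, rather than reading raw VLF output, DBCC SQLPERF is a simpler option; a minimal sketch:

-- Reports log size (MB) and log space used (%) for every database on the instance
DBCC SQLPERF (LOGSPACE);
go
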
DBCC SHOWCONTIG displays the fragmentation level of our table, as does the DMV sys.dm_db_index_physical_stats. The sys.dm_db_index_physical_stats DMV gives much more detail about our index than SHOWCONTIG does.
Some of the interesting columns to look out for in sys.dm_db_index_physical_stats are (a sample query follows the list):
1. avg_fragmentation_in_percent
2. index_depth (the number of levels in the index)
3. index_level (one row per level of the index; 0 is the leaf level and the highest number is the root)
4. page_count (the number of pages at each corresponding level)
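
For instance, a minimal query over this DMV that pulls out just those columns for our clustered index (index_id 1) could look like this:

SELECT
      index_depth,                      -- number of levels in the index
      index_level,                      -- 0 = leaf level, highest value = root
      page_count,                       -- pages at this level
      avg_fragmentation_in_percent      -- logical fragmentation at this level
FROM sys.dm_db_index_physical_stats (
      DB_ID ('TestDB'), OBJECT_ID ('TestTbl'), 1, NULL, 'DETAILED');
go
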

Now let’s do some deletes

/*
Some deletes
*/

Begin tran;
go

declare @i int;
Set @i = 0;

While(@i < 5000)
Begin

      if( @i % 2 = 1)
            delete from
                  TestTbl
            Where
                  col2 = replicate('A', @i % 800);

      Set @i = @i + 1;
End

commit tran;
go
-- this should leave 100000 rows



More inserts and some more random deletes; this took almost 15 to 20 minutes on my laptop.

/*
Some more inserts
*/

Begin tran;
go

Declare @i int;
set @i = 0;

While (@i < 400000)
Begin
      insert into
            TestTbl (col1,col2)
      Values
            (replicate ('B',100) + convert( varchar(20),@i), replicate('B', @i % 2000));  
      Set @i = @i + 1;
End;
go

Commit tran;
go


--now we have 500000 rows

/*
Again, some more deletes
*/
Begin tran;
go

declare @i int;
Set @i = 0;

While(@i < 2000)
Begin

      if( @i % 2 = 1)
            delete from
                  TestTbl    
            Where
                  col2 = replicate('B', @i % 2000);

      Set @i = @i + 1;
End;
go

commit tran;
go

-- now we have 300000 rows

Now let’s see how fragmented our database is

dbcc showcontig;

/*------------------------
dbcc showcontig;
------------------------*/
DBCC SHOWCONTIG scanning 'TestTbl' table...
Table: 'TestTbl' (21575115); index ID: 1, database ID: 8
TABLE level scan performed.
- Pages Scanned................................: 88036
- Extents Scanned..............................: 12193
- Extent Switches..............................: 37413
- Avg. Pages per Extent........................: 7.2
- Scan Density [Best Count:Actual Count].......: 29.41% [11005:37414]
- Logical Scan Fragmentation ..................: 77.19%
- Extent Scan Fragmentation ...................: 0.01%
- Avg. Bytes Free per Page.....................: 4962.2
- Avg. Page Density (full).....................: 38.69%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.




On average our pages are only about 39% full (more than 60% free space), and the data and log files are each around 800 MB in size.
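
If you want to verify those file sizes yourself, you could query sys.master_files; a small sketch (the size column is a count of 8 KB pages):

-- File sizes for TestDB in MB (size is stored as a count of 8 KB pages)
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.master_files
WHERE database_id = DB_ID('TestDB');
go
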
Now it's time to shrink the database.

use master;
go
-- shrink the database, leaving 20% free space in each file
dbcc shrinkdatabase (TestDB,20);
go



After shrinking, the data file remains almost the same size but the log file shrinks drastically. The DBCC SHRINKDATABASE command tries to reduce the file size for the entire database, leaving the specified percentage of free space in each file; in the case above we specified 20% free.
Shrinking the data file is generally a bad idea and should be a DBA's last resort, as it can increase fragmentation within the indexes.
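
If all you need is to release unused space at the end of the data file without moving any pages (and therefore without adding fragmentation), the TRUNCATEONLY option is a safer alternative. A minimal sketch, assuming the default logical data file name TestDB:

-- Release only the free space at the end of the data file; no pages are moved,
-- so this does not introduce additional index fragmentation
DBCC SHRINKFILE (TestDB, TRUNCATEONLY);
go
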
Now let's look at the effect of the shrink on our table.

use testdb;
go

DBCC ShowContig;
go

/*------------------------
use testdb;
go

DBCC ShowContig;
go
------------------------*/
DBCC SHOWCONTIG scanning 'TestTbl' table...
Table: 'TestTbl' (21575115); index ID: 1, database ID: 8
TABLE level scan performed.
- Pages Scanned................................: 88036
- Extents Scanned..............................: 12193
- Extent Switches..............................: 37413
- Avg. Pages per Extent........................: 7.2
- Scan Density [Best Count:Actual Count].......: 29.41% [11005:37414]
- Logical Scan Fragmentation ..................: 77.19%
- Extent Scan Fragmentation ...................: 0.01%
- Avg. Bytes Free per Page.....................: 4962.2
- Avg. Page Density (full).....................: 38.69%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.


Now let's rebuild our clustered primary key index with row compression and look at the result.

Alter index PK__TestTbl__A259EE5403317E3D on TestTbl
rebuild With (DATA_COMPRESSION = ROW);
go
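
Note that the index name used above is the system-generated name of the primary key constraint, so it will be different on your machine. If you don't want to look it up, a sketch that rebuilds every index on the table (here there is only the clustered PK) works just as well:

-- Rebuild all indexes on TestTbl with row compression;
-- avoids having to know the system-generated PK index name
ALTER INDEX ALL ON TestTbl
REBUILD WITH (DATA_COMPRESSION = ROW);
go
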

DBCC ShowContig;
/*
/*------------------------
DBCC ShowContig;
------------------------*/
DBCC SHOWCONTIG scanning 'TestTbl' table...
Table: 'TestTbl' (21575115); index ID: 1, database ID: 8
TABLE level scan performed.
- Pages Scanned................................: 36979
- Extents Scanned..............................: 4624
- Extent Switches..............................: 4623
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.98% [4623:4624]
- Logical Scan Fragmentation ..................: 0.01%
- Extent Scan Fragmentation ...................: 0.39%
- Avg. Bytes Free per Page.....................: 659.8
- Avg. Page Density (full).....................: 91.85%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
*/



SELECT * FROM sys.dm_db_index_physical_stats (
    DB_ID ('TestDB'), OBJECT_ID ('TestTbl'), Null, NULL, 'DETAILED');
GO


After the index rebuild (with row compression), we achieved 91.85% average page density. Now we insert some more data to grow the transaction log file again, so that we can demonstrate the DBCC SHRINKFILE command.

begin tran;
go

Declare @i int;
set @i = 0;

While (@i < 200000)
Begin
      insert into
            TestTbl (col1,col2)
      Values
            (replicate ('C',100) + convert( varchar(20),@i), replicate('C', @i % 2000));  
      Set @i = @i + 1;
End;
go

Commit tran;
go



At any point in time, to check the row count in the table, you can run the following from a separate connection or query window (the NOLOCK hint lets it read past the open transaction):

select count(1) from TestTbl with (nolock);
go


Now issue the shrink file command.

select *  FROM sys.database_files;
go

DBCC SHRINKFILE (TestDB_log,1);
go


dbcc loginfo;
go


You will see the log file reduced to (approximately) the target size in MB. If you forget the logical name of the log file, you can find it using the sys.database_files catalog view.
It's advisable to shrink just the log file when it grows beyond capacity and you need to reclaim the disk space it occupies, rather than shrinking the entire database, which may cause additional index fragmentation.
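
For example, a short query along these lines lists each file's logical name and current size (again, size is a count of 8 KB pages):

-- Logical file names and current sizes for the current database
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;
go
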
Finally, let's clean up.

/*
Finally time for some cleanup
*/

use master;
go

Alter database TestDB
set single_user with rollback immediate;
go

drop database TestDB;
go




Enjoy!
