
Thursday, March 29, 2012

Data Files and one FileGroup

Hi,
Is it OK if we have one FileGroup but 3 data files in different disk drives?
We have a database of size 150 GB; two data files are on the same disk drive
and the third data file is on a different disk drive. All are in the same
filegroup.
Shouldn't we create another file group for the third data file?
Thanks,
Rosie
Rosie,
You're probably fine. Multiple files in a filegroup is common. MS
recommends one file per physical processor to help parallelism. One thing
to look out for: for best performance you want the files to fill at the
same rate. Autogrow and adding additional files often lead to hot spots on
the IO subsystem.
"Rosie" <Rosie@.discussions.microsoft.com> wrote in message
news:D194C8A5-5B94-4CBA-A994-4E78EA69E6B9@.microsoft.com...
> Hi,
> Is it OK if we have one FileGroup but 3 data files in different disk
> drives?
> We have a database of size 150 GB; two data files are on the same disk drive
> and the third data file is on a different disk drive. All are in the same
> filegroup.
> Shouldn't we create another file group for the third data file?
> Thanks,
> Rosie
>
>|||inline
--
Sunil Agarwal (MSFT)
This posting is provided "AS IS" with no warranties, and confers no rights.
"Rosie" <Rosie@.discussions.microsoft.com> wrote in message
news:D194C8A5-5B94-4CBA-A994-4E78EA69E6B9@.microsoft.com...
> Hi,
> Is it OK if we have one FileGroup but 3 data files in different disk
> drives?
Sunila> Yes. This will help you distribute the IO load across multiple
spindles.
> We have a database of size 150 GB; two data files are on the same disk drive
> and the third data file is on a different disk drive. All are in the same
> filegroup.
Sunila> I would be curious to know why you created two files belonging to the
same filegroup on the same disk.
> Shouldn't we create another file group for the third data file?
Sunila> SQL Server allows creating a filegroup that contains files on one
or more physical disks.
> Thanks,
> Rosie
>
|||> You're probably fine. Multiple files in a filegroup is common. MS recommends one file per
> physical processor to help parallelism.
Interesting... Do you have a link to that statement? So far, the only thing I've seen is that you
might want to consider the number of physical disks, somewhere between 1 file per disk and 1 file per
8 disks. MS hasn't given out a firm recommendation as it is very hardware dependent.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Danny" <someone@.nowhere.com> wrote in message news:CIPef.19373$%t4.15694@.trnddc07...
> Rosie,
> You're probably fine. Multiple files in a filegroup is common. MS recommends one file per
> physical processor to help parallelism. One thing to look out for: for best performance you
> want the files to fill at the same rate. Autogrow and adding additional files often lead to hot
> spots on the IO subsystem.
>
> "Rosie" <Rosie@.discussions.microsoft.com> wrote in message
> news:D194C8A5-5B94-4CBA-A994-4E78EA69E6B9@.microsoft.com...
>> Hi,
>> Is it OK if we have one FileGroup but 3 data files in different disk drives?
>> We have a database of size 150 GB; two data files are on the same disk drive
>> and the third data file is on a different disk drive. All are in the same
>> filegroup.
>> Shouldn't we create another file group for the third data file?
>> Thanks,
>> Rosie
>>
>
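For readers skimming the thread, here is a minimal sketch of adding a third data file to an existing filegroup on a separate drive, as discussed above; the database name, logical file name, path, and sizes are all made up for illustration:

-- Add a third data file to the PRIMARY filegroup on another drive.
-- SalesDB, SalesDB_Data3 and the path are hypothetical; pre-size the new file
-- close to the others so proportional fill spreads writes evenly.
ALTER DATABASE SalesDB
ADD FILE
(
    NAME = SalesDB_Data3,
    FILENAME = 'E:\Data\SalesDB_Data3.ndf',
    SIZE = 10240MB,
    FILEGROWTH = 1024MB
)
TO FILEGROUP [PRIMARY];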


Data file size used space

In the SQL EM, we are able to get the data file size with
used space and free space details. Similarly for the log
file also.
We have the dbcc sqlperf command to find out the log file
used space details. Is there any way to find out the
data file size? sp_helpfile just gives the total size of
the data file, whereas sp_spaceused also does not
distinguish between used and free space per data file.
Could anyone help me out with this?
Do a Google search on DBCC SHOWFILESTATS. This is what EM is using. Not documented, so all the usual
warnings apply.
--
Tibor Karaszi, SQL Server MVP
Archive at: http://groups.google.com/groups?oi=djq&as_ugroup=microsoft.public.sqlserver
"Mangai" <anonymous@.discussions.microsoft.com> wrote in message
news:08fc01c3ad95$4cf28380$a301280a@.phx.gbl...
> In the SQL EM, we are able to get the data file size with
> used space and free space details. Similarly for the log
> file also.
> We have the dbcc sqlperf command to find out the log file
> used space details. Is there any way to find out the
> data file size? sp_helpfile just gives the total size of
> the data file wherein sp_spaceused is also not
> distinguishing between used and free space.
> Could anyone help me out in this?
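As a rough sketch of an alternative to the undocumented DBCC SHOWFILESTATS, the FILEPROPERTY function reports the pages used per file on SQL Server 2000; the database name below is assumed:

USE mydb
GO
-- sysfiles reports sizes in 8 KB pages; divide by 128 to get MB.
-- Log files have groupid = 0, so this returns data files only.
SELECT name,
       size / 128.0 AS total_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0 AS free_mb
FROM sysfiles
WHERE groupid <> 0
GO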

data file size problem

How can I change the initial size of the data and log files?

In my database properties it shows that my data file size is 81 MB and log file size is 985 MB! But my database only contains some tables and stored procedures with a few rows of data in each table.

I checked that the actual .mdf and .ldf files are really that big... I tried to change it but it didn't work...

Can someone please teach me how to change it? Thanks!

Back up the DB with Truncate Log. Check Books Online for more info.

|||

http://www.codeproject.com/useritems/truncate_log_SQL_server.asp
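For what it's worth, a minimal sketch of the SQL 2000-era approach the reply and the linked article describe; the database and logical log file names are assumptions, and truncating the log this way breaks the log backup chain:

-- Discard the inactive portion of the log (SQL 2000/2005 only),
-- then shrink the physical log file to roughly 100 MB.
BACKUP LOG MyDatabase WITH TRUNCATE_ONLY
GO
DBCC SHRINKFILE (MyDatabase_Log, 100)
GO

Take a full backup afterwards so the backup chain can be restarted.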

Data File size issue

Hello everyone,
I have a problem with the data files on one of my databases. I was trying
to manage the file size for a secondary data file (that had blown up on me
because there were no growth limits set initially) by trying something on a
hunch. It was probably a mistake to do this, but I wanted to reduce the size of
this file by transferring only part of the data from this file to the other
files in the filegroup. Here's an illustration of what I did:
Original Scenario:
             Used Space   Free Space   Total Space
             ----------   ----------   -----------
File 1         18945 MB         0 MB      18945 MB
File 2         87052 MB      10.8 MB      87062 MB
I wanted to reduce the size of secondary data file File 2. So, I created
File 3 on a third Server hard drive (File 1 & 2 are on 2 separate drives),
set growth limits of 30000 MB each for File 1 and File 3, and decided to try
and empty the data from File 2. The theory behind this being that SQL Server
would try to empty File 2, but would only be able to transfer up to the growth
limits set on Files 1 & 3. This way, I would have more evenly distributed usage
of data space. Well, here is what it now looks like:
             Used Space   Free Space   Total Space
             ----------   ----------   -----------
File 1         29854 MB         0 MB      29854 MB
File 2         46251 MB     40774 MB      87025 MB
File 3         29652 MB         0 MB      29652 MB
This is all fine and dandy except the fact that I am unable to shrink File
2. No matter what I try, I am unable to get the 40 GB of free space in this
file back to the Operating System. This is an extremely frustrating problem,
and I would appreciate any advice. Please help!!
Thank you!
What about creating two secondary files instead of one and using "DBCC
SHRINKFILE (file_2, EMPTYFILE)" so the data from file_2 is migrated to the
other files in the filegroup, and then deleting file_2 using "ALTER DATABASE".
AMB
"Shishir Viriyala" wrote:

> Hello everyone,
> I have a problem with the data files on one of my databases. I was trying
> to manage the file size for a secondary data file (that had blown up on me
> because there were no growth limits set initially) by trying something on a
> hunch. It was probably a mistake to this, but I wanted to reduce the size of
> this file by transferring only part of the data from this file to the other
> files in the filegroup. Here's an illustration of what I did:
> Original Scenario:
> Used Space Free Space Total Space
> -- -- --
> File 1 18945 MB 0 MB 18945 MB
> File 2 87052 MB 10.8 MB 87062 MB
> I wanted to reduce the size of secondary data file File 2. So, I created
> File 3 on a third Server hard drive (File 1 & 2 are on 2 separate drives),
> set growth limits of 30000 MB each for File 1 and File 3, and decided to try
> and empty the data from File 2. The theory behind this being that SQL Server
> would try to empty File 2, but will only be able to transfer up to the growth
> limits set on Files 1 & 2. This way, I will have more evenly displaced usage
> of data space. Well, here is what it now looks like:
> Used Space Free Space Total Space
> -- -- --
> File 1 29854 MB 0 MB 29854 MB
> File 2 46251 MB 40774 MB 87025 MB
> File 3 29652 MB 0 MB 29652 MB
> This is all fine and dandy except the fact that I am unable to shrink File
> 2. No matter what I try, I am unable to get the 40 GB of free space in this
> file back to the Operating System. This is an extremely frustrating problem,
> and I would appreciate any advice. Please help!!
> Thank you!
|||Did you try shrinking the file using dbcc shrinkfile?
AMB
"Alejandro Mesa" wrote:
> What about creating two secondary files instead one and using "dbcc
> shrinkfile (file_2, , EMPTYFILE)" so the data from file_2 is migrated to the
> other files in the filegroup and then deleting file_2 using "alter database".
>
> AMB
> "Shishir Viriyala" wrote:
|||I suspect you need to have some free swing space in another file to
accomplish the operation?
"Shishir Viriyala" wrote:

> Hello everyone,
> I have a problem with the data files on one of my databases. I was trying
> to manage the file size for a secondary data file (that had blown up on me
> because there were no growth limits set initially) by trying something on a
> hunch. It was probably a mistake to this, but I wanted to reduce the size of
> this file by transferring only part of the data from this file to the other
> files in the filegroup. Here's an illustration of what I did:
> Original Scenario:
> Used Space Free Space Total Space
> -- -- --
> File 1 18945 MB 0 MB 18945 MB
> File 2 87052 MB 10.8 MB 87062 MB
> I wanted to reduce the size of secondary data file File 2. So, I created
> File 3 on a third Server hard drive (File 1 & 2 are on 2 separate drives),
> set growth limits of 30000 MB each for File 1 and File 3, and decided to try
> and empty the data from File 2. The theory behind this being that SQL Server
> would try to empty File 2, but will only be able to transfer up to the growth
> limits set on Files 1 & 2. This way, I will have more evenly displaced usage
> of data space. Well, here is what it now looks like:
> Used Space Free Space Total Space
> -- -- --
> File 1 29854 MB 0 MB 29854 MB
> File 2 46251 MB 40774 MB 87025 MB
> File 3 29652 MB 0 MB 29652 MB
> This is all fine and dandy except the fact that I am unable to shrink File
> 2. No matter what I try, I am unable to get the 40 GB of free space in this
> file back to the Operating System. This is an extremely frustrating problem,
> and I would appreciate any advice. Please help!!
> Thank you!
|||Yep, I have used both Enterprise Manager and Transact-SQL (DBCC SHRINKFILE
method) to try and shrink this file. Nothing changes on the file however. I
will try emptying File 2 into another data file and see what happens. Hope
it works. Thanks for the idea!
"Alejandro Mesa" wrote:
> Did you try shrinking the file using dbcc shrinkfile?
>
> AMB
> "Alejandro Mesa" wrote:
|||Alejandro,
I tried to create additional data files, and then tried to empty File 2, but
nothing happened. In Enterprise Manager, I got a confirmation message saying
the file was shrunk successfully, but the file remains at the same size.
Then I tried DBCC SHRINKFILE (EMPTYFILE), and again I didn't receive any
errors, but my file's size remains the same. Something is seriously wrong,
and I am not sure how to troubleshoot. Any ideas?
Thanks!
"Shishir Viriyala" wrote:
> Yep, I have used both Enterpise Manager and Transact SQL (DBCC SHRINKFILE
> method) to try and shrink this file. Nothing changes on the file however. I
> will try emptying File 2 into another data file and see what happens. Hope
> it works. Thanks for the idea!
> "Alejandro Mesa" wrote:
|||Jeff,
I managed to find just enough space on the disk to create a new data file
that could hold all of my problem data file. However, referring to my
previous post in this thread, the file has stopped responding to any shrink
attempts. Not sure what to do at the moment.....
Shishir
"Jeffrey K. Ericson" wrote:
> I suspect you need to have some free swing space in another file to
> accomplish the operation?
> "Shishir Viriyala" wrote:
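For reference, a sketch of the empty-and-remove sequence Alejandro suggests; the database and logical file names are hypothetical, and the remaining files in the filegroup must have room (or be allowed to grow) to absorb the data:

USE MyDatabase
GO
-- Migrate all data out of File 2 into the other files of the filegroup,
-- then drop the now-empty file so its space is returned to the OS.
DBCC SHRINKFILE (MyDB_Data2, EMPTYFILE)
GO
ALTER DATABASE MyDatabase REMOVE FILE MyDB_Data2
GO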


Data File Size Increase

Hi,
Currently we have 20 databases and are merge-replicating the databases
as well.
But the data file size has increased for only 2 of the databases.
Please tell me what may be the reason for that.
Please give me the solution as early as possible.
Thanks
Soura
Hi
Do you have the autogrow feature enabled?
Look at the ALTER DATABASE command to increase the size of the file.
"SouRa" <SouRa@.discussions.microsoft.com> wrote in message
news:371E1A9F-F22C-497E-A548-CA8EBE68D833@.microsoft.com...
> Hi,
> Currently we are having 20 databases and mergereplicating the
databases
> also.
> But only for 2 databases the DATAFile size got increased.
> Please give what may be the reason for that.
> Please give me the Solution as early as Possible
> Thanks
> Soura
>|||Perhaps there was free space in the other databases' data files to accommodate the
modifications?
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
"SouRa" <SouRa@.discussions.microsoft.com> wrote in message
news:371E1A9F-F22C-497E-A548-CA8EBE68D833@.microsoft.com...
> Hi,
> Currently we are having 20 databases and mergereplicating the databases
> also.
> But only for 2 databases the DATAFile size got increased.
> Please give what may be the reason for that.
> Please give me the Solution as early as Possible
> Thanks
> Soura
>|||Hi,
Also check whether you have restricted file growth for the other databases in
the database options tab.
CU
Andreas
"Uri Dimant" wrote:
> Hi
> Do you have autogrow feature enabled?
> Look at ALTER DATABASE command to increase a size if the file
>
> "SouRa" <SouRa@.discussions.microsoft.com> wrote in message
> news:371E1A9F-F22C-497E-A548-CA8EBE68D833@.microsoft.com...
> > Hi,
> > Currently we are having 20 databases and mergereplicating the
> databases
> > also.
> >
> > But only for 2 databases the DATAFile size got increased.
> >
> > Please give what may be the reason for that.
> >
> > Please give me the Solution as early as Possible
> >
> > Thanks
> > Soura
> >
> >
>
>
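To illustrate the autogrow and growth-restriction settings mentioned in the replies, a sketch using ALTER DATABASE MODIFY FILE; the database name, logical file name, and sizes are assumptions:

-- sp_helpfile shows the current size, growth, and maxsize of each file.
-- Set an explicit size, growth increment, and growth cap on a data file;
-- note that SIZE must be larger than the file's current size.
ALTER DATABASE MyDatabase
MODIFY FILE
(
    NAME = MyDatabase_Data,
    SIZE = 2048MB,
    FILEGROWTH = 256MB,
    MAXSIZE = 4096MB
);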


Data File Size

SELECT size_in_mb, used_size_in_mb, size_in_mb - used_size_in_mb AS free_in_mb FROM (
SELECT cntr_value / 1024 AS size_in_mb,
(SELECT cntr_value / 1024 FROM master..sysperfinfo WHERE counter_name = 'Log File(s) Used Size (KB)' AND instance_name = 'mydb') AS used_size_in_mb
FROM master..sysperfinfo WHERE counter_name = 'Log File(s) Size (KB)' AND instance_name = 'mydb'
) a

I need to store the total size, used size, and free size of the data files in a table to get an average of how much my data file has increased over a week.
The above query I am using is for the log file size. Can anyone help me with the data file size, please?
I've checked sp_helpfile and sysfiles but couldn't find what I am looking for (used and free space). EM in taskpad view for a database shows the statistics for the data file. I've tried a trace to find the stored procedure it uses but couldn't!
Maybe I am unaware of a simple stored procedure that can do this for me.

Howdy!
use master
go
sp_helptext sp_spaceused
go

Maybe you will get some ideas from here ... am a little busy ... so just help yourself ...
|||You can use code from sp_spaceused to make your own logic ...
|||use master
go
sp_helptext sp_spaceused
go

Maybe you will get some ideas from here ... am a little busy ... so just help yourself ...

Thanks for the guidance; I will try.

Howdy!
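One hedged way to do this: the data file counterpart of the log counters can come from FILEPROPERTY, stored into a small history table and sampled on a schedule; all object names below are made up:

-- Capture total/used/free MB per data file; run daily and average over the week.
CREATE TABLE datafile_size_history (
    captured_at datetime NOT NULL DEFAULT (GETDATE()),
    file_name   sysname  NOT NULL,
    total_mb    decimal(12, 2) NOT NULL,
    used_mb     decimal(12, 2) NOT NULL,
    free_mb     decimal(12, 2) NOT NULL
)
GO
INSERT INTO datafile_size_history (file_name, total_mb, used_mb, free_mb)
SELECT name,
       size / 128.0,
       FILEPROPERTY(name, 'SpaceUsed') / 128.0,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0
FROM sysfiles
WHERE groupid <> 0   -- data files only; log files have groupid 0
GO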

Data File size

My datafile is at 80GB. The data used in the datafile is about 70GB and
growing, with approximately 4,000,000 records in a table that contains a
Text field. I have a weekly purge that deletes about 500,000 records. (Disk
space is at a premium.) When I delete the 500,000 records in the table, the
data used in the data file does not reduce, but remains the same and keeps
on growing. I need to keep the datafile comfortably below 80GB.
My understanding is that shrinking the data file is not the best thing to
do, but if I have to, then I will. So essentially, to keep the data file
comfortably below 80GB I would:
1. Perform the weekly purge (deleting approximately 500,000 records).
2. Shrink the datafile "dbcc shrinkfile ( filename, TRUNCATEONLY)"
Would the above 2 steps be the means of keeping my datafile comfortably
below 80GB?
Message posted via http://www.droptable.com
You can turn off the auto-grow option and lock your
data file size at 80 GB. Make sure you have enough space
for your transaction log, though, in order to be able to
delete records weekly.
Shrink is really not a good operation, especially on large DBs.
Regards.
"Robert Richards via droptable.com" wrote:

> My datafile is at 80GB. The data used in the datafile is about 70GB and
> growing, with approximately 4,000,000 records in a table that contains a
> Text field. I have a weekly purge that deletes about 500,000 records. (Disk
> space is at a premium.) When I delete the 500,000 records in the table, the
> data used in the data file does not reduce, but remains the same and keeps
> on growing. I need to keep the datafile comfortably below 80GB.
> My understanding is that shrinking the data file is not the best thing to
> do, but if I have to, then I will. So essentially, to keep the data file
> comfortably below 80GB I would:
> 1. Perform the weekly purge (deleting approximately 500,000 records).
> 2. Shrink the datafile "dbcc shrinkfile ( filename, TRUNCATEONLY)"
> Would the above 2 steps be the means of keeping my datafile comfortably
> below 80GB?
> --
> Message posted via http://www.droptable.com
>
|||That does not sound like a solution, turning off autogrow. The reason is
that at the current rate, the data file is growing and will eventually
fill up the 80 GB. The purge is not reducing space in the data file.
Probably due to a high water mark. Therefore, even though I am keeping the
number of records at 4,000,000 or below, the unused space never gets
recovered, thus the datafile keeps growing even though the number of
records remains approximately the same.
I understand that ShrinkFile is not the best option, but what other option
is there?
Message posted via http://www.droptable.com
|||Hi,
What you are performing is absolutely perfect. Do a weekly purge or move the
old data into a history database.
Since you are going to store the data back into the same file, you may not
need to shrink the file, because the space you
purged will be reused for the new data that is coming in.
What is the main reason you need to keep 80 GB as the maximum for your database?
There are many databases around the globe with more than a terabyte of data
running smoothly.
Thanks
Hari
Sql Server MVP
"Robert Richards via droptable.com" <forum@.nospam.droptable.com> wrote in
message news:0be9557414954ceca658ef750e5bb45e@.droptable.co m...
> My datafile is at 80GB. The data used in the datafile is about 70GB and
> growing, with approximately 4,000,000 records in a table that contains a
> Text field. I have a weekly purge that deletes about 500,000 records.
> (Disk
> space is at a premium.) When I delete the 500,000 records in the table,
> the
> data used in the data file does not reduce, but remains the same and keeps
> on growing. I need to keep the datafile comfortably below 80GB.
> My understanding is that shrinking the data file is not the best thing to
> do, but if I have to, then I will. So essentially, to keep the data file
> comfortably below 80GB I would:
> 1. Perform the weekly purge (deleting approximately 500,000 records).
> 2. Shrink the datafile "dbcc shrinkfile ( filename, TRUNCATEONLY)"
> Would the above 2 steps be the means of keeping my datafile comfortably
> below 80GB?
> --
> Message posted via http://www.droptable.com
|||I do not believe I am being understood. When I delete the 500,000, it is
not to an archive table. These 500,000 records are gone.
The 80GB max is due to available disk space.
When I delete records, I need to reclaim the unused space. Are the two
steps below a way of doing that?
1. Perform the weekly purge (deleting approximately 500,000 records).
2. Shrink the datafile "dbcc shrinkfile ( filename, TRUNCATEONLY)"
Is there a better way to reclaim unused space?
Message posted via http://www.droptable.com
|||What you are seeing is mainly due to the fact that you have text or image
columns. Even though you delete rows, it may not always free up all the
space it previously used for the text columns. If you have a clustered
index on the table you can reindex the table and possibly free up some space
that may have been due to fragmentation of the non-text columns. But in
2000 there is nothing you can do to clean up text space usage short of
exporting all the data, truncating the table and importing it back in. SQL
2005 will allow you to reorganize blobs. Your real solution is to get more
disk space so you can deal with it better.
Andrew J. Kelly SQL MVP
"Robert Richards via droptable.com" <forum@.nospam.droptable.com> wrote in
message news:0be9557414954ceca658ef750e5bb45e@.droptable.co m...
> My datafile is at 80GB. The data used in the datafile is about 70GB and
> growing, with approximately 4,000,000 records in a table that contains a
> Text field. I have a weekly purge that deletes about 500,000 records.
> (Disk
> space is at a premium.) When I delete the 500,000 records in the table,
> the
> data used in the data file does not reduce, but remains the same and keeps
> on growing. I need to keep the datafile comfortably below 80GB.
> My understanding is that shrinking the data file is not the best thing to
> do, but if I have to, then I will. So essentially, to keep the data file
> comfortably below 80GB I would:
> 1. Perform the weekly purge (deleting approximately 500,000 records).
> 2. Shrink the datafile "dbcc shrinkfile ( filename, TRUNCATEONLY)"
> Would the above 2 steps be the means of keeping my datafile comfortably
> below 80GB?
> --
> Message posted via http://www.droptable.com
|||Why then does the "data used" decrease on the data file when I delete
records from the same table in my Test environment, but when I delete
records from the table in production the "data used" does not decrease?
Message posted via http://www.droptable.com
|||How are you determining the "data used"? Are you using sp_spaceused? Have
you tried running DBCC UPDATEUSAGE or specifying the @updateusage option?
Andrew J. Kelly SQL MVP
"Robert Richards via droptable.com" <forum@.droptable.com> wrote in message
news:ce78b9bd5c5447658b8a0e2badaf82cb@.droptable.co m...
> Why then does the "data used" decrease on the data file when I delete
> records from the same table in my Test environment, but when I delete
> records from the table in production the "data used" does not decrease?
> --
> Message posted via http://www.droptable.com
|||Also, the TRUNCATEONLY option will not free up the space because truncate
will only remove free space from the end of the file. You need to MOVE all the data to
the head of the file before shrinking. You can only do that if you use
DBCC SHRINKFILE(MyDB_Data, TargetSize).
Reindexing the clustered index will most likely cause the file to grow even
more because the entire table is basically copied to a new location as it
reindexes. However, if you reindex, that will defragment by the index
order. Then if you shrink the file, it will move the data pages in
defragmented order, but it could take quite some time, especially on an 80 GB
database.
You should consider creating 1-to-1 relationships between the LOB data and their
associated base tables. Then you could create VIEWs to replace the original
table definitions to minimize code impact. Then put all of the LOB data in
separate table(s). Those tables could then be created in separate files
on separate FileGroups. Shrinking and reordering just those files
should minimize the durations of those operations.
If you want to guarantee what you are looking at, run sp_spaceused
@updateusage = 'true'.
Sincerely,
Anthony Thomas

"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:ecOFGg9ZFHA.3168@.TK2MSFTNGP10.phx.gbl...
How are you determining the "data used"? Are you using sp_spaceused? Have
you tried running DBCC UPDATEUSAGE or specifying the @updateusage option?
Andrew J. Kelly SQL MVP
"Robert Richards via droptable.com" <forum@.droptable.com> wrote in message
news:ce78b9bd5c5447658b8a0e2badaf82cb@.droptable.co m...
> Why then does the "data used" decrease on the data file when I delete
> records from the same table in my Test environment, but when I delete
> records from the table in production the "data used" does not decrease?
> --
> Message posted via http://www.droptable.com
|||I am still not clear how or why my data continues to increase. I cannot
quite reconcile what you say with my current data set. For instance:
5/21/2005
Records in table: 4,091,571
Data used within data file: 64,546.3 MB
6/5/2005
Records in table: 3,679,559
Data used within data file: 71,828.5 MB
So even though I have 412,012 fewer records in the table (the only other
tables are small, static lookup tables), the data within the data file has
grown by 7,282.2 MB.
I guess I just do not understand the text column well enough to understand
why there is substantial growth despite significantly fewer records in the
table.
Please help, as I am going to have to explain this to my supervisor in such
a way as to either justify more disk space or resolve the growth issue
despite fewer records. Thanks for your patience.
Message posted via http://www.droptable.com

Data file size

Hi,
I brought this question up before, but I still need some help with it.
About a week or two ago, the data file for my SQL 2000 database was about
3.5 GB, with a 1 GB Transaction log. I back up the transaction log every 3
hours during the day and backup the Data file nightly.
I recently imported around 500,000 additional records into the database, and
the data file grew to about 5.5 GB, with a 8 GB Transaction log. I ran a
DBCC shrink and it reduced the data file to about 4 GB and the Transaction Log
to about 3 GB.
I've done this numerous times already, and in another week, the data and
transaction log files will probably be right back up to 5.5 GB and 8 GB, and
probably even larger. Is this supposed to happen, or am I forgetting to do
something?
Any help would be terrific.
Thanks
-
Stu
Are you performing log backups? If not, make sure you have selected the
Simple recovery model.
David Portas
SQL Server MVP
|||Yes - The logs are backed up at 11:00, 2:00, and 5:00 every day.
"David Portas" wrote:

> Are you performing log backups? If not, make sure you have selected the
> Simple recovery model.
> --
> David Portas
> SQL Server MVP
> --
>
|||You need space in both the data and log files for the new data you are
inserting. If the files are not large enough they will grow. Depending on
the size and the growth percentage they may grow quite large. If you will
just repeat this process each week it makes no sense to shrink the files.
That does more harm than good and they will simply grow back again. Having
too much free space is not a problem but too little is.
http://www.karaszi.com/SQLServer/info_dont_shrink.asp Shrinking
considerations
Andrew J. Kelly SQL MVP
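A sketch of what pre-sizing might look like (logical file names, sizes, and growth increments are placeholders and should come from your own growth estimates):
-- SIZE can only be increased with MODIFY FILE; it must be larger than the
-- current file size.
ALTER DATABASE MyDB
    MODIFY FILE (NAME = MyDB_Data, SIZE = 8000MB, FILEGROWTH = 500MB)
ALTER DATABASE MyDB
    MODIFY FILE (NAME = MyDB_Log, SIZE = 4000MB, FILEGROWTH = 500MB)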
"Stu" <Stu@.discussions.microsoft.com> wrote in message
news:6BF3273A-E9DA-4434-B60A-0B8281854592@.microsoft.com...
> Hi,
> I brought this question up before, but I still need some help with it.
> About a week or two ago, the data file for my SQL 2000 database was about
> 3.5 GB, with a 1 GB Transaction log. I back up the transaction log every
> 3
> hours during the day and backup the Data file nightly.
> I recently imported around 500,000 additional records into the database,
> and
> the data file grew to about 5.5 GB, with a 8 GB Transaction log. I ran a
> DBCC shrink and it reduced the data file to about 4 GB and the Transaction
> Log
> to about 3 GB.
> I've done this numerous times already, and in another week, the data and
> transaction log files will probably be right back up to 5.5 GB and 8 GB,
> and
> probably even larger. Is this supposed to happen, or am I forgetting to
> do
> something?
> Any help would be terrific.
> Thanks
> -
> Stu

Data File size

My datafile is at 80GB. The data used in the datafile is about 70GB and
growing, with approximately 4,000,000 records in a table that contains a
Text field. I have a weekly purge that deletes about 500,000 records. (Disk
space is at a premium.) When I delete the 500,000 records in the table, the
data used in the data file does not reduce, but remains the same and keeps
on growing. I need to keep the datafile comfortably below 80GB.
My understanding is that shrinking the data file is not the best thing to
do, but if I have to, then I will. So essentially, to keep the data file
comfortably below 80GB I would:
1. Perform the weekly purge (deleting approximately 500,000 records).
2. Shrink the datafile "dbcc shrinkfile ( filename, TRUNCATEONLY)"
Would the above 2 steps be the means of keeping my datafile comfortably
below 80GB?
Message posted via http://www.droptable.com
You can turn off the auto-grow option and lock your datafile size at 80 GB.
Make sure you have enough space for your transaction log, though, in order to
be able to delete records weekly.
Shrink is really not a good operation, especially on large DBs.
Regards.
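One way to express that, assuming a placeholder database and logical file name:
-- Disable further automatic growth for the data file:
ALTER DATABASE MyDB
    MODIFY FILE (NAME = MyDB_Data, FILEGROWTH = 0)
-- Or cap it at 80 GB while still allowing autogrow up to that limit:
ALTER DATABASE MyDB
    MODIFY FILE (NAME = MyDB_Data, MAXSIZE = 80000MB)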
"Robert Richards via droptable.com" wrote:

> My datafile is at 80GB. The data used in the datafile is about 70GB and
> growing, with approximately 4,000,000 records in a table that contains a
> Text field. I have a weekly purge that deletes about 500,000 records. (Disk
> space is at a premium.) When I delete the 500,000 records in the table, the
> data used in the data file does not reduce, but remains the same and keeps
> on growing. I need to keep the datafile comfortably below 80GB.
> My understanding is that shrinking the data file is not the best thing to
> do, but if I have to, then I will. So essentially, to keep the data file
> comfortably below 80GB I would:
> 1. Perform the weekly purge (deleting approximately 500,000 records).
> 2. Shrink the datafile "dbcc shrinkfile ( filename, TRUNCATEONLY)"
> Would the above 2 steps be the means of keeping my datafile comfortably
> below 80GB?
> --
> Message posted via http://www.droptable.com
>|||That does not sound like a solution, turning off auto grow. The reason why
is that at the current rate, the data file is growing and will eventually
fill up the 80GB. The purge is not reducing space in the data file.
Probably due to a high water mark. Therefore, even though I am keeping the
number of records at 4,000,000 or below, the unused space never gets
recovered, thus the datafile keeps growing even though the number of
records remains approximately the same.
I understand that ShrinkFile is not the best option, but what other option
is there?
Message posted via http://www.droptable.com|||Hi,
What you are performing is absolutely perfect. Do a weekly purge or move the
old data into a history database.
Since you are going to store the data back into the same file you may not
need to shrink the file because the space you
purged will be utilized for the new data which is coming in.
What is the main reason you need to keep 80GB as the maximum for your database?
There are many databases in the world larger than a terabyte that run
smoothly.
Thanks
Hari
Sql Server MVP
"Robert Richards via droptable.com" <forum@.nospam.droptable.com> wrote in
message news:0be9557414954ceca658ef750e5bb45e@.SQdroptable.com...
> My datafile is at 80GB. The data used in the datafile is about 70GB and
> growing, with approximately 4,000,000 records in a table that contains a
> Text field. I have a weekly purge that deletes about 500,000 records.
> (Disk
> space is at a premium.) When I delete the 500,000 records in the table,
> the
> data used in the data file does not reduce, but remains the same and keeps
> on growing. I need to keep the datafile comfortably below 80GB.
> My understanding is that shrinking the data file is not the best thing to
> do, but if I have to, then I will. So essentially, to keep the data file
> comfortably below 80GB I would:
> 1. Perform the weekly purge (deleting approximately 500,000 records).
> 2. Shrink the datafile "dbcc shrinkfile ( filename, TRUNCATEONLY)"
> Would the above 2 steps be the means of keeping my datafile comfortably
> below 80GB?
> --
> Message posted via http://www.droptable.com|||I do not believe I am being understood. When I delete the 500,000 records,
they are not moved to an archive table. These 500,000 records are gone.
The 80GB max is due to available disk space.
When I delete records, I need to reclaim the unused space. Are the two
steps below a way of doing that?
1. Perform the weekly purge (deleting approximately 500,000 records).
2. Shrink the datafile "dbcc shrinkfile ( filename, TRUNCATEONLY)"
Is there a better way to reclaim unused space?
Message posted via http://www.droptable.com|||What you are seeing is mainly due to the fact you have text or image
columns. Even though you delete rows it may not always free up all the
space it previously used for the text columns. If you have a clustered
index on the table you can reindex the table and possibly free up some space
that may have been due to fragmentation of the non-text columns. But in
2000 there is nothing you can do to clean up text space usage short of
exporting all the data, truncating the table and importing it back in. SQL
2005 will allow you to reorganize blobs. Your real solution is to get more
disk space so you can deal with it better.
Andrew J. Kelly SQL MVP
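A rough outline of that export/truncate/import approach (all object names are placeholders; it needs enough free space for a full copy of the table plus an outage window, and identity columns, constraints, and indexes need extra handling):
-- Copy the rows out, including the text column:
SELECT * INTO dbo.MyTable_Staging FROM dbo.MyTable
-- TRUNCATE deallocates the data and text pages:
TRUNCATE TABLE dbo.MyTable
-- Copy the rows back (SET IDENTITY_INSERT is needed if there is an identity column):
INSERT INTO dbo.MyTable SELECT * FROM dbo.MyTable_Staging
DROP TABLE dbo.MyTable_Staging
-- Then reclaim the freed space in the data file:
DBCC SHRINKFILE (MyDB_Data, 75000)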
"Robert Richards via droptable.com" <forum@.nospam.droptable.com> wrote in
message news:0be9557414954ceca658ef750e5bb45e@.SQdroptable.com...
> My datafile is at 80GB. The data used in the datafile is about 70GB and
> growing, with approximately 4,000,000 records in a table that contains a
> Text field. I have a weekly purge that deletes about 500,000 records.
> (Disk
> space is at a premium.) When I delete the 500,000 records in the table,
> the
> data used in the data file does not reduce, but remains the same and keeps
> on growing. I need to keep the datafile comfortably below 80GB.
> My understanding is that shrinking the data file is not the best thing to
> do, but if I have to, then I will. So essentially, to keep the data file
> comfortably below 80GB I would:
> 1. Perform the weekly purge (deleting approximately 500,000 records).
> 2. Shrink the datafile "dbcc shrinkfile ( filename, TRUNCATEONLY)"
> Would the above 2 steps be the means of keeping my datafile comfortably
> below 80GB?
> --
> Message posted via http://www.droptable.com|||Why then does the "data used" decrease on the data file when I delete
records from the same table in my Test environment, but when I delete
records from the table in production the "data used" does not decrease?
Message posted via http://www.droptable.com|||How are you determining the "data used"? Are you using sp_spaceused? Have
you tried running DBCC UPDATEUSAGE or specifying the @updateusage option?
Andrew J. Kelly SQL MVP
"Robert Richards via droptable.com" <forum@.droptable.com> wrote in message
news:ce78b9bd5c5447658b8a0e2badaf82cb@.SQdroptable.com...
> Why then does the "data used" decrease on the data file when I delete
> records from the same table in my Test environment, but when I delete
> records from the table in production the "data used" does not decrease?
> --
> Message posted via http://www.droptable.com|||Also, the TRUNCATEONLY option will not free up the space, because truncate
will only remove space from the end of the file. You need to MOVE all the data
to the head of the file before shrinking. You can only do that if you use
DBCC SHRINKFILE(MyDB_Data, TargetSize).
Reindexing the clustered index will most likely cause the file to grow even
more, because the entire table is basically copied to a new location as it
reindexes. However, if you reindex, that will defragment the data in index
order. Then if you shrink the file, it will move the data pages in
defragmented order, but that could take quite some time, especially on an
80 GB database.
You should consider creating 1-to-1 relationships between the LOB data and
their associated base tables. Then you could create VIEWs to replace the
original table definitions to minimize code impact. Then put all of the LOB
data in separate table(s). Those tables could then be created in separate
files on separate FileGroups. Shrinking and reordering just those files
should minimize the durations of those operations.
If you want to guarantee what you are looking at, run sp_spaceused
@updateusage = 'true'.
Sincerely,
Anthony Thomas
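A sketch of the split described above; every name, path, and size here is hypothetical, and existing INSERT/UPDATE code would still have to target the base tables rather than the view:
-- New filegroup and file to hold the LOB data:
ALTER DATABASE MyDB ADD FILEGROUP LOBData
GO
ALTER DATABASE MyDB ADD FILE
    (NAME = MyDB_Lob1, FILENAME = 'E:\Data\MyDB_Lob1.ndf', SIZE = 20000MB)
    TO FILEGROUP LOBData
GO
-- 1-to-1 table holding only the key and the text column:
CREATE TABLE dbo.MyMessageText
(
    MessageID   int  NOT NULL PRIMARY KEY,
    MessageBody text NULL
) ON LOBData TEXTIMAGE_ON LOBData
GO
-- View that preserves the original column list for existing SELECTs:
CREATE VIEW dbo.MyMessagesView
AS
SELECT m.MessageID, m.CreatedOn, t.MessageBody
FROM dbo.MyMessages AS m
JOIN dbo.MyMessageText AS t ON t.MessageID = m.MessageID
GO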
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:ecOFGg9ZFHA.3168@.TK2MSFTNGP10.phx.gbl...
How are you determining the "data used"? Are you using sp_spaceused? Have
you tried running DBCC UPDATEUSAGE or specifying the @updateusage option?
Andrew J. Kelly SQL MVP
"Robert Richards via droptable.com" <forum@.droptable.com> wrote in message
news:ce78b9bd5c5447658b8a0e2badaf82cb@.SQdroptable.com...
> Why then does the "data used" decrease on the data file when I delete
> records from the same table in my Test environment, but when I delete
> records from the table in production the "data used" does not decrease?
> --
> Message posted via http://www.droptable.com|||I am still not clear how or why my data continues to increase. I cannot
quite reconcile what you say with my current data set. For instance:
5/21/2005
Records in table: 4,091,571
Data used within data file: 64,546.3 MB
6/5/2005
Records in table: 3,679,559
Data used within data file: 71,828.5 MB
So even though I have 412,012 fewer records in the table (the only other
tables are small, static lookup tables), the data within the data file has
grown by 7,282.2 MB.
I guess I just do not understand the text column well enough to understand
why there is such substantial growth despite significantly fewer records in
the table.
Please help, as I am going to have to explain this to my supervisor in a way
that either justifies more disk space or resolves the growth issue despite
fewer records. Thanks for your patience.
Message posted via http://www.droptable.com

Data File Size

Does the % full of a datafile or a database impact performance?
For instance, if I have a database that is 90-99% full, will that have a
negative impact on performance? Are there any rules of thumb regarding this?
This is of course prior to the database being automatically grown.
"JDS" <JDS@.discussions.microsoft.com> wrote in message
news:6D4CD763-FC56-4660-9097-8AC4E188D9AF@.microsoft.com...
> Does the % full of a datafile or a database impact performance?
> For instance if I have a database that is 90-99% full will that have a
> negative impact on performance? Are there any rules of thumb regarding
> this?
> This is of course prior to the database being automatically grown.
It really depends on how the DB is being used.
If you're doing strictly queries, no.
If you're doing strictly inserts, then yes.
If you're doing strictly updates, then maybe.
If you're doing some mixture of above "it depends".
Obviously with inserts, you'll need more room once you hit 100% full. Before
then it probably won't matter.
If you're doing updates and the data fits in the same space as the current
data, you're probably fine. If you update a varchar(500) from a two
character entry to a 400 character entry, odds the DB will need to move
stuff around.
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html|||Best practice is to size your database proactively and only let autogrowth
act in 'emergency' cases. Size the database to allow for 1-2 years of
expected growth, and revisit at least every 6 months. This will allow for
maintenance operations like index rebuilds/reorgs to be able to lay data
down sequentially on disk for optimal performance and also avoid autogrowth
slowdowns during peak periods.
--
Kevin G. Boles
Indicium Resources, Inc.
SQL Server MVP
kgboles a earthlink dt net
"JDS" <JDS@.discussions.microsoft.com> wrote in message
news:6D4CD763-FC56-4660-9097-8AC4E188D9AF@.microsoft.com...
> Does the % full of a datafile or a database impact performance?
> For instance if I have a database that is 90-99% full will that have a
> negative impact on performance? Are there any rules of thumb regarding
> this?
> This is of course prior to the database being automatically grown.|||Thanks Guys,
I understand that if the database is at a point that it's forced to autogrow,
that will degrade performance while the database is growing. I want to know
what the impact of a, say, 90-95% full database is. This is without an
autogrow situation. There will be updates, inserts, and reads, but we never
reach full capacity. Does a database that is, say, 60-75% full perform better
than a database that is 90-95% full, or is there no known impact?
"TheSQLGuru" wrote:
> Best practice is to size your database proactively and only let autogrowth
> act in 'emergency' cases. Size the database to allow for 1-2 years of
> expected growth, and revisit at least every 6 months. This will allow for
> maintenance operations like index rebuilds/reorgs to be able to lay data
> down sequentially on disk for optimal performance and also avoid autogrowth
> slowdowns during peak periods.
> --
> Kevin G. Boles
> Indicium Resources, Inc.
> SQL Server MVP
> kgboles a earthlink dt net
>
> "JDS" <JDS@.discussions.microsoft.com> wrote in message
> news:6D4CD763-FC56-4660-9097-8AC4E188D9AF@.microsoft.com...
> > Does the % full of a datafile or a database impact performance?
> > For instance if I have a database that is 90-99% full will that have a
> > negative impact on performance? Are there any rules of thumb regarding
> > this?
> > This is of course prior to the database being automatically grown.
>
>|||Just to add more to what Kevin was getting at. Chances are (not 100%
guaranteed) that the closer you get to a full database the more the data
will be non-contiguous in the data files. When you reindex an index with
DBCC DBREINDEX or ALTER INDEX REBUILD the engine will create an entirely new
copy of the index in the file and then drop the old one when done. So first
off you need about 1.2 times the size of the index in free space just to
rebuild it. But if you only have that exact amount or anything close you
will most likely get the new index built in small fragments or pockets
within the data file(s) wherever there happened to be room. Whereas with
plenty of free space the chances are much greater that you will have the
indexes built in a contiguous fashion. And if you do any range or table
scans, read-aheads, etc., this can make a big difference in performance.
Andrew J. Kelly SQL MVP
Solid Quality Mentors
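A rough way to check that free space before a rebuild (names are placeholders; sys.database_files and ALTER INDEX are SQL Server 2005 and later, with the 2000 equivalent commented out):
-- Free space per file; size and SpaceUsed are reported in 8 KB pages:
SELECT name,
       size / 128 AS size_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
FROM sys.database_files
-- If free_mb comfortably exceeds roughly 1.2x the index size, rebuild it:
ALTER INDEX IX_MyLargeTable_Key ON dbo.MyLargeTable REBUILD
-- SQL Server 2000 equivalent:
-- DBCC DBREINDEX ('dbo.MyLargeTable', 'IX_MyLargeTable_Key')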
"JDS" <JDS@.discussions.microsoft.com> wrote in message
news:F5291FF9-8516-437D-9053-3814FB5835B1@.microsoft.com...
> Thanks Guys,
> I understand that if the database is at a point that it's forced to
> autogrow
> that will degrade performance while the database is growing. I want to
> know
> what the impact of a, say, 90-95% full database is. This is without an
> autogrow situation. There will be updates, inserts, and reads, but we
> never
> reach full capacity. Does a database that is, say, 60-75% full perform
> better
> than a database that is 90-95% full, or is there no known impact?
> "TheSQLGuru" wrote:
>> Best practice is to size your database proactively and only let
>> autogrowth
>> act in 'emergency' cases. Size the database to allow for 1-2 years of
>> expected growth, and revisit at least every 6 months. This will allow
>> for
>> maintenance operations like index rebuilds/reorgs to be able to lay data
>> down sequentially on disk for optimal performance and also avoid
>> autogrowth
>> slowdowns during peak periods.
>> --
>> Kevin G. Boles
>> Indicium Resources, Inc.
>> SQL Server MVP
>> kgboles a earthlink dt net
>>
>> "JDS" <JDS@.discussions.microsoft.com> wrote in message
>> news:6D4CD763-FC56-4660-9097-8AC4E188D9AF@.microsoft.com...
>> > Does the % full of a datafile or a database impact performance?
>> > For instance if I have a database that is 90-99% full will that have a
>> > negative impact on performance? Are there any rules of thumb regarding
>> > this?
>> > This is of course prior to the database being automatically grown.
>>

Data File size

My datafile is at 80GB. The data used in the datafile is about 70GB and
growing, with approximately 4,000,000 records in a table that contains a
Text field. I have a weekly purge that deletes about 500,000 records. (Disk
space is at a premium.) When I delete the 500,000 records in the table, the
data used in the data file does not reduce, but remains the same and keeps
on growing. I need to keep the datafile comfortably below 80GB.
My understanding is that shrinking the data file is not the best thing to
do, but if I have to, then I will. So essentially, to keep the data file
comfortably below 80GB I would:
1. Perform the weekly purge (deleting approximately 500,000 records).
2. Shrink the datafile "dbcc shrinkfile ( filename, TRUNCATEONLY)"
Would the above 2 steps be the means of keeping my datafile comfortably
below 80GB?
--
Message posted via http://www.sqlmonster.com
You can turn off the auto-grow option and lock your datafile size at 80 GB.
Make sure you have enough space for your transaction log, though, in order to
be able to delete records weekly.
Shrink is really not a good operation, especially on large DBs.
Regards.
"Robert Richards via SQLMonster.com" wrote:
> My datafile is at 80GB. The data used in the datafile is about 70GB and
> growing, with approximately 4,000,000 records in a table that contains a
> Text field. I have a weekly purge that deletes about 500,000 records. (Disk
> space is at a premium.) When I delete the 500,000 records in the table, the
> data used in the data file does not reduce, but remains the same and keeps
> on growing. I need to keep the datafile comfortably below 80GB.
> My understanding is that shrinking the data file is not the best thing to
> do, but if I have to, then I will. So essentially, to keep the data file
> comfortably below 80GB I would:
> 1. Perform the weekly purge (deleting approximately 500,000 records).
> 2. Shrink the datafile "dbcc shrinkfile ( filename, TRUNCATEONLY)"
> Would the above 2 steps be the means of keeping my datafile comfortably
> below 80GB?
> --
> Message posted via http://www.sqlmonster.com
>|||That does not sound like a solution, turning off auto grow. The reason why
is that at the current rate, the data file is growing and will eventually
fill up the 80GB. The purge is not reducing space in the data file.
Probably due to a high water mark. Therefore, even though I am keeping the
number of records at 4,000,000 or below, the unused space never gets
recovered, thus the datafile keeps growing even though the number of
records remains approximately the same.
I understand that ShrinkFile is not the best option, but what other option
is there?
--
Message posted via http://www.sqlmonster.com|||Hi,
What you are performing is absolutely perfect. Do a weekly purge or move the
old data into a history database.
Since you are going to store the data back into the same file you may not
need to shrink the file because the space you
purged will be utilized for the new data which is coming in.
What is the main reason you need to keep 80GB as the maximum for your database?
There are many databases in the world larger than a terabyte that run
smoothly.
Thanks
Hari
Sql Server MVP
"Robert Richards via SQLMonster.com" <forum@.nospam.SQLMonster.com> wrote in
message news:0be9557414954ceca658ef750e5bb45e@.SQLMonster.com...
> My datafile is at 80GB. The data used in the datafile is about 70GB and
> growing, with approximately 4,000,000 records in a table that contains a
> Text field. I have a weekly purge that deletes about 500,000 records.
> (Disk
> space is at a premium.) When I delete the 500,000 records in the table,
> the
> data used in the data file does not reduce, but remains the same and keeps
> on growing. I need to keep the datafile comfortably below 80GB.
> My understanding is that shrinking the data file is not the best thing to
> do, but if I have to, then I will. So essentially, to keep the data file
> comfortably below 80GB I would:
> 1. Perform the weekly purge (deleting approximately 500,000 records).
> 2. Shrink the datafile "dbcc shrinkfile ( filename, TRUNCATEONLY)"
> Would the above 2 steps be the means of keeping my datafile comfortably
> below 80GB?
> --
> Message posted via http://www.sqlmonster.com|||I do not believe I am being understood. When I delete the 500,000 records,
they are not moved to an archive table. These 500,000 records are gone.
The 80GB max is due to available disk space.
When I delete records, I need to reclaim the unused space. Are the two
steps below a way of doing that?
1. Perform the weekly purge (deleting approximately 500,000 records).
2. Shrink the datafile "dbcc shrinkfile ( filename, TRUNCATEONLY)"
Is there a better way to reclaim unused space?
--
Message posted via http://www.sqlmonster.com|||What you are seeing is mainly due to the fact you have text or image
columns. Even though you delete rows it may not always free up all the
space it previously used for the text columns. If you have a clustered
index on the table you can reindex the table and possibly free up some space
that may have been due to fragmentation of the non-text columns. But in
2000 there is nothing you can do to clean up text space usage short of
exporting all the data, truncating the table and importing it back in. SQL
2005 will allow you to reorganize blobs. Your real solution is to get more
disk space so you can deal with it better.
--
Andrew J. Kelly SQL MVP
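For reference, the SQL 2005 option mentioned above looks roughly like this (index and table names are placeholders):
-- SQL Server 2005 and later only; attempts to compact the LOB (text/image)
-- pages of the index in place:
ALTER INDEX PK_MyMessages ON dbo.MyMessages
    REORGANIZE WITH (LOB_COMPACTION = ON)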
"Robert Richards via SQLMonster.com" <forum@.nospam.SQLMonster.com> wrote in
message news:0be9557414954ceca658ef750e5bb45e@.SQLMonster.com...
> My datafile is at 80GB. The data used in the datafile is about 70GB and
> growing, with approximately 4,000,000 records in a table that contains a
> Text field. I have a weekly purge that deletes about 500,000 records.
> (Disk
> space is at a premium.) When I delete the 500,000 records in the table,
> the
> data used in the data file does not reduce, but remains the same and keeps
> on growing. I need to keep the datafile comfortably below 80GB.
> My understanding is that shrinking the data file is not the best thing to
> do, but if I have to, then I will. So essentially, to keep the data file
> comfortably below 80GB I would:
> 1. Perform the weekly purge (deleting approximately 500,000 records).
> 2. Shrink the datafile "dbcc shrinkfile ( filename, TRUNCATEONLY)"
> Would the above 2 steps be the means of keeping my datafile comfortably
> below 80GB?
> --
> Message posted via http://www.sqlmonster.com|||Why then does the "data used" decrease on the data file when I delete
records from the same table in my Test environment, but when I delete
records from the table in production the "data used" does not decrease?
--
Message posted via http://www.sqlmonster.com|||How are you determining the "data used"? Are you using sp_spaceused? Have
you tried running DBCC UPDATEUSAGE or specifying the @updateusage option?
--
Andrew J. Kelly SQL MVP
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:ce78b9bd5c5447658b8a0e2badaf82cb@.SQLMonster.com...
> Why then does the "data used" decrease on the data file when I delete
> records from the same table in my Test environment, but when I delete
> records from the table in production the "data used" does not decrease?
> --
> Message posted via http://www.sqlmonster.com|||Also, the TRUNCATEONLY option will not free up the space, because truncate
will only remove space from the end of the file. You need to MOVE all the data
to the head of the file before shrinking. You can only do that if you use
DBCC SHRINKFILE(MyDB_Data, TargetSize).
Reindexing the clustered index will most likely cause the file to grow even
more, because the entire table is basically copied to a new location as it
reindexes. However, if you reindex, that will defragment the data in index
order. Then if you shrink the file, it will move the data pages in
defragmented order, but that could take quite some time, especially on an
80 GB database.
You should consider creating 1-to-1 relationships between the LOB data and
their associated base tables. Then you could create VIEWs to replace the
original table definitions to minimize code impact. Then put all of the LOB
data in separate table(s). Those tables could then be created in separate
files on separate FileGroups. Shrinking and reordering just those files
should minimize the durations of those operations.
If you want to guarantee what you are looking at, run sp_spaceused
@updateusage = 'true'.
Sincerely,
Anthony Thomas
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:ecOFGg9ZFHA.3168@.TK2MSFTNGP10.phx.gbl...
How are you determining the "data used"? Are you using sp_spaceused? Have
you tried running DBCC UPDATEUSAGE or specifying the @updateusage option?
--
Andrew J. Kelly SQL MVP
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:ce78b9bd5c5447658b8a0e2badaf82cb@.SQLMonster.com...
> Why then does the "data used" decrease on the data file when I delete
> records from the same table in my Test environment, but when I delete
> records from the table in production the "data used" does not decrease?
> --
> Message posted via http://www.sqlmonster.com|||I am still not clear how or why my data continues to increase. I cannot
quite reconcile what you say with my current data set. For instance:
5/21/2005
Records in table: 4,091,571
Data used within data file: 64,546.3 MB
6/5/2005
Records in table: 3,679,559
Data used within data file: 71,828.5 MB
So even though I have 412,012 fewer records in the table (the only other
tables are small, static lookup tables), the data within the data file has
grown by 7,282.2 MB.
I guess I just do not understand the text column well enough to understand
why there is such substantial growth despite significantly fewer records in
the table.
Please help, as I am going to have to explain this to my supervisor in a way
that either justifies more disk space or resolves the growth issue despite
fewer records. Thanks for your patience.
--
Message posted via http://www.sqlmonster.com|||Hi
Have a look at DBCC CLEANTABLE in BOL.
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@.epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
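A minimal sketch of that suggestion (database and table names are placeholders). Per Books Online, DBCC CLEANTABLE reclaims space left behind by dropped variable-length and text columns:
-- The optional third argument is the number of rows processed per transaction:
DBCC CLEANTABLE ('MyDB', 'dbo.MyMessages', 10000)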
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:ce3ad79a66384035a99a9f369793c430@.SQLMonster.com...
>I am still not clear how or why my data continues to increase. What you say
> I cannot quite comprehend given my current data set. For instance:
> 5/21/2005
> Records in table: 4,091,571
> Data used within data file: 64,546.3 MB
> 6/5/2005
> Records in table: 3,679,559
> Data used within data file: 71,828.5 MB
> So even though I have 412,012 less records in the table (the only other
> tables are small, static lookup tables) the data within the data file has
> grown 7,282.2 MB.
> I guess I just do not understand the text column well enough, to
> understand
> why the substantial growth, despite the significantly less records in the
> table.
> Please help, as I am going to have to explain this to my supervisor in
> such
> a way to either justify more disk space, or resolve the growth issue
> despite less records. Thanks for your patience.
> --
> Message posted via http://www.sqlmonster.com|||An have you re-built your clustered index on the table?
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@.epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
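On SQL 2000 that rebuild could look like the sketch below (the table name and fill factor are placeholders; an empty index name rebuilds all indexes on the table):
-- Rebuild all indexes on the table; 90 is the fill factor:
DBCC DBREINDEX ('dbo.MyLargeTable', '', 90)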
"Mike Epprecht (SQL MVP)" <mike@.epprecht.net> wrote in message
news:eVPWDahaFHA.2768@.tk2msftngp13.phx.gbl...
> Hi
> Have a look at DBCC CLEANTABLE in BOL.
> Regards
> --
> Mike Epprecht, Microsoft SQL Server MVP
> Zurich, Switzerland
> IM: mike@.epprecht.net
> MVP Program: http://www.microsoft.com/mvp
> Blog: http://www.msmvps.com/epprecht/
> "Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in
> message news:ce3ad79a66384035a99a9f369793c430@.SQLMonster.com...
>>I am still not clear how or why my data continues to increase. What you
>>say
>> I cannot quite comprehend given my current data set. For instance:
>> 5/21/2005
>> Records in table: 4,091,571
>> Data used within data file: 64,546.3 MB
>> 6/5/2005
>> Records in table: 3,679,559
>> Data used within data file: 71,828.5 MB
>> So even though I have 412,012 less records in the table (the only other
>> tables are small, static lookup tables) the data within the data file has
>> grown 7,282.2 MB.
>> I guess I just do not understand the text column well enough, to
>> understand
>> why the substantial growth, despite the significantly less records in the
>> table.
>> Please help, as I am going to have to explain this to my supervisor in
>> such
>> a way to either justify more disk space, or resolve the growth issue
>> despite less records. Thanks for your patience.
>> --
>> Message posted via http://www.sqlmonster.com
>|||I am not sure if this is helpful but I have included the results of running
"dbcc showcontig" on the table:
DBCC SHOWCONTIG scanning [Table Name] table...
Table: [Table Name] (2009058193); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned........................: 42259
- Extents Scanned.......................: 5292
- Extent Switches.......................: 5291
- Avg. Pages per Extent..................: 8.0
- Scan Density [Best Count:Actual Count]......: 99.83% [5283:5292]
- Logical Scan Fragmentation ..............: 0.00%
- Extent Scan Fragmentation ...............: 0.13%
- Avg. Bytes Free per Page................: 781.0
- Avg. Page Density (full)................: 90.35%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
The following is from running dbcc checktable (run two days ago):
DBCC results for [Table Name].
There are 3880797 rows in 44707 pages for object [Table Name].
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
Lastly the following is from running dbcc checkalloc against the database:
DBCC results for [Database Name].
***************************************************************
Table sysobjects Object ID 1.
Index ID 1. FirstIAM (1:10). Root (1:11). Dpages 3.
Index ID 1. 5 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (1:25). Root (1:15). Dpages 1.
Index ID 2. 2 pages used in 0 dedicated extents.
Index ID 3. FirstIAM (1:33). Root (1:31). Dpages 1.
Index ID 3. 2 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table sysindexes Object ID 2.
Index ID 1. FirstIAM (1:13). Root (1:14). Dpages 4.
Index ID 1. 6 pages used in 0 dedicated extents.
Index ID 255. FirstIAM (1:27). Root (1:64). Dpages 0.
Index ID 255. 37 pages used in 6 dedicated extents.
Total number of extents is 6.
***************************************************************
Table syscolumns Object ID 3.
Index ID 1. FirstIAM (1:26). Root (1:17). Dpages 12.
Index ID 1. 14 pages used in 1 dedicated extents.
Index ID 2. FirstIAM (1:35). Root (1:34). Dpages 5.
Index ID 2. 7 pages used in 0 dedicated extents.
Total number of extents is 1.
***************************************************************
Table systypes Object ID 4.
Index ID 1. FirstIAM (1:29). Root (1:30). Dpages 1.
Index ID 1. 3 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (1:37). Root (1:36). Dpages 1.
Index ID 2. 2 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table syscomments Object ID 6.
Index ID 1. FirstIAM (1:52). Root (1:51). Dpages 24.
Index ID 1. 27 pages used in 3 dedicated extents.
Total number of extents is 3.
***************************************************************
Table sysfiles1 Object ID 8.
Index ID 0. FirstIAM (1:12). Root (1:32). Dpages 1.
Index ID 0. 2 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table syspermissions Object ID 9.
Index ID 1. FirstIAM (1:72). Root (1:63). Dpages 1.
Index ID 1. 3 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table sysusers Object ID 10.
Index ID 1. FirstIAM (1:39). Root (1:38). Dpages 1.
Index ID 1. 3 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (1:42). Root (1:41). Dpages 1.
Index ID 2. 2 pages used in 0 dedicated extents.
Index ID 3. FirstIAM (1:44). Root (1:43). Dpages 1.
Index ID 3. 2 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table sysproperties Object ID 11.
Index ID 1. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 1. 0 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table sysdepends Object ID 12.
Index ID 1. FirstIAM (1:55). Root (1:54). Dpages 3.
Index ID 1. 5 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (1:58). Root (1:57). Dpages 2.
Index ID 2. 4 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table sysreferences Object ID 14.
Index ID 1. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 1. 0 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 2. 0 pages used in 0 dedicated extents.
Index ID 3. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 3. 0 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table sysfulltextcatalogs Object ID 19.
Index ID 1. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 1. 0 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 2. 0 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table sysfulltextnotify Object ID 24.
Index ID 1. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 1. 0 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table sysfilegroups Object ID 96.
Index ID 1. FirstIAM (1:47). Root (1:46). Dpages 1.
Index ID 1. 3 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (1:50). Root (1:49). Dpages 1.
Index ID 2. 2 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table dtproperties Object ID 517576882.
Index ID 1. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 1. 0 pages used in 0 dedicated extents.
Index ID 255. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 255. 0 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table BTSError Object ID 1061578820.
Index ID 1. FirstIAM (1:1076526). Root (1:1145807). Dpages 18042.
Index ID 1. 18102 pages used in 2261 dedicated extents.
Index ID 4. FirstIAM (1:1076527). Root (1:1157874). Dpages 1933.
Index ID 4. 1957 pages used in 243 dedicated extents.
Index ID 5. FirstIAM (1:1159153). Root (1:1159579). Dpages 2242.
Index ID 5. 2274 pages used in 283 dedicated extents.
Index ID 255. FirstIAM (1:355450). Root (1:355449). Dpages 0.
Index ID 255. 88231 pages used in 11036 dedicated extents.
Total number of extents is 13823.
***************************************************************
Table DriveSpace Object ID 1333579789.
Index ID 1. FirstIAM (1:1159157). Root (1:1159158). Dpages 1.
Index ID 1. 3 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (1:273386). Root (1:1159159). Dpages 1.
Index ID 2. 2 pages used in 0 dedicated extents.
Index ID 5. FirstIAM (1:273388). Root (1:273387). Dpages 1.
Index ID 5. 2 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table DBSize Object ID 1509580416.
Index ID 1. FirstIAM (1:1076523). Root (1:1076524). Dpages 5.
Index ID 1. 7 pages used in 0 dedicated extents.
Index ID 3. FirstIAM (1:1159155). Root (1:1076525). Dpages 1.
Index ID 3. 2 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table MessageCount Object ID 1669580986.
Index ID 1. FirstIAM (1:355452). Root (1:355448). Dpages 2.
Index ID 1. 4 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (1:411009). Root (1:411008). Dpages 1.
Index ID 2. 2 pages used in 0 dedicated extents.
Index ID 3. FirstIAM (1:411012). Root (1:411010). Dpages 1.
Index ID 3. 2 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table IFSMessageTypes Object ID 1993058136.
Index ID 1. FirstIAM (1:618). Root (1:619). Dpages 1.
Index ID 1. 3 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table IFSMessages Object ID 2009058193.
Index ID 1. FirstIAM (1:432792). Root (1:9362263). Dpages 44700.
Index ID 1. 44780 pages used in 5597 dedicated extents.
Index ID 6. FirstIAM (1:432793). Root (1:9404245). Dpages 10083.
Index ID 6. 10119 pages used in 1266 dedicated extents.
Index ID 255. FirstIAM (1:268). Root (1:267). Dpages 0.
Index ID 255. 9062113 pages used in 1133279 dedicated extents.
Total number of extents is 1140142.
***************************************************************
Table IFSUpdateError Object ID 2025058250.
Index ID 1. FirstIAM (1:116). Root (1:1168777). Dpages 800.
Index ID 1. 821 pages used in 102 dedicated extents.
Index ID 255. FirstIAM (1:113). Root (1:112). Dpages 0.
Index ID 255. 487 pages used in 93 dedicated extents.
Total number of extents is 195.
***************************************************************
Processed 46 entries in sysindexes for database ID 7.
File 1. Number of extents = 1154232, used pages = 9229133, reserved pages =9233856.
File 1 (number of mixed extents = 40, mixed pages = 320).
Object ID 1, Index ID 0, data extents 0, pages 5, mixed extent pages 5.
Object ID 1, Index ID 2, index extents 0, pages 2, mixed extent pages 2.
Object ID 1, Index ID 3, index extents 0, pages 2, mixed extent pages 2.
Object ID 2, Index ID 0, data extents 0, pages 6, mixed extent pages 6.
Object ID 2, Index ID 255, index extents 6, pages 37, mixed extent
pages 22.
Object ID 3, Index ID 0, data extents 1, pages 14, mixed extent pages 9.
Object ID 3, Index ID 2, index extents 0, pages 7, mixed extent pages 7.
Object ID 4, Index ID 0, data extents 0, pages 3, mixed extent pages 3.
Object ID 4, Index ID 2, index extents 0, pages 2, mixed extent pages 2.
Object ID 6, Index ID 0, data extents 3, pages 27, mixed extent pages
10.
Object ID 8, Index ID 0, data extents 0, pages 2, mixed extent pages 2.
Object ID 9, Index ID 0, data extents 0, pages 3, mixed extent pages 3.
Object ID 10, Index ID 0, data extents 0, pages 3, mixed extent pages 3.
Object ID 10, Index ID 2, index extents 0, pages 2, mixed extent pages
2.
Object ID 10, Index ID 3, index extents 0, pages 2, mixed extent pages
2.
Object ID 12, Index ID 0, data extents 0, pages 5, mixed extent pages 5.
Object ID 12, Index ID 2, index extents 0, pages 4, mixed extent pages
4.
Object ID 96, Index ID 0, data extents 0, pages 3, mixed extent pages 3.
Object ID 96, Index ID 2, index extents 0, pages 2, mixed extent pages
2.
Object ID 99, Index ID 0, data extents 22, pages 91, mixed extent pages
1.
Object ID 1061578820, Index ID 0, data extents 2261, pages 18102, mixed
extent pages 26.
Object ID 1061578820, Index ID 4, index extents 243, pages 1957, mixed
extent pages 24.
Object ID 1061578820, Index ID 5, index extents 283, pages 2274, mixed
extent pages 26.
Object ID 1061578820, Index ID 255, index extents 11036, pages 88231,
mixed extent pages 28.
Object ID 1333579789, Index ID 0, data extents 0, pages 3, mixed extent
pages 3.
Object ID 1333579789, Index ID 2, index extents 0, pages 2, mixed
extent pages 2.
Object ID 1333579789, Index ID 5, index extents 0, pages 2, mixed
extent pages 2.
Object ID 1509580416, Index ID 0, data extents 0, pages 7, mixed extent
pages 7.
Object ID 1509580416, Index ID 3, index extents 0, pages 2, mixed
extent pages 2.
Object ID 1669580986, Index ID 0, data extents 0, pages 4, mixed extent
pages 4.
Object ID 1669580986, Index ID 2, index extents 0, pages 2, mixed
extent pages 2.
Object ID 1669580986, Index ID 3, index extents 0, pages 2, mixed
extent pages 2.
Object ID 1993058136, Index ID 0, data extents 0, pages 3, mixed extent
pages 3.
Object ID 2009058193, Index ID 0, data extents 5597, pages 44780, mixed
extent pages 10.
Object ID 2009058193, Index ID 6, index extents 1266, pages 10119,
mixed extent pages 10.
Object ID 2009058193, Index ID 255, index extents 1133279, pages
9062113, mixed extent pages 28.
Object ID 2025058250, Index ID 0, data extents 102, pages 821, mixed
extent pages 26.
Object ID 2025058250, Index ID 255, index extents 93, pages 487, mixed
extent pages 20.
Total number of extents = 1154232, used pages = 9229133, reserved pages =9233856 in this database.
(number of mixed extents = 40, mixed pages = 320) in this database.
CHECKALLOC found 0 allocation errors and 0 consistency errors in database
[Database Name].
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
--
Message posted via http://www.sqlmonster.com|||Well, certainly the bulk of your data resides in the LOB for the following
object:
Object ID 2009058193, Index ID 255, index extents 1133279, pages
9062113, mixed extent pages 28.
Another problem I see is the lack of usage for Clustered Indexes. Every
table should be defined with a Clustered Index. How to choose the
attributes for these indexes is a whole other discussion (and argument among
my peers here); however, the need for them cannot be overstated, especially
when using LOBs.
I would certainly try to place one on Object ID 2009058193 to see if it
improves the storage.
Sincerely,
Anthony Thomas
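If the table really does lack a clustered index, adding one might look like the sketch below; the key column is hypothetical and would have to be chosen as discussed above:
-- Building this on a large table needs substantial free space and time:
CREATE CLUSTERED INDEX CIX_IFSMessages ON dbo.IFSMessages (MessageID)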
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:4a0b6d430d914267a3283d041ca13783@.SQLMonster.com...
I am not sure if this is helpful but I have included the results of running
"dbcc showcontig" on the table:
DBCC SHOWCONTIG scanning [Table Name] table...
Table: [Table Name] (2009058193); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned........................: 42259
- Extents Scanned.......................: 5292
- Extent Switches.......................: 5291
- Avg. Pages per Extent..................: 8.0
- Scan Density [Best Count:Actual Count]......: 99.83% [5283:5292]
- Logical Scan Fragmentation ..............: 0.00%
- Extent Scan Fragmentation ...............: 0.13%
- Avg. Bytes Free per Page................: 781.0
- Avg. Page Density (full)................: 90.35%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
The following is from running dbcc checktable (run two days ago):
DBCC results for [Table Name].
There are 3880797 rows in 44707 pages for object [Table Name].
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
Lastly the following is from running dbcc checkalloc against the database:
DBCC results for [Database Name].
***************************************************************
Table sysobjects Object ID 1.
Index ID 1. FirstIAM (1:10). Root (1:11). Dpages 3.
Index ID 1. 5 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (1:25). Root (1:15). Dpages 1.
Index ID 2. 2 pages used in 0 dedicated extents.
Index ID 3. FirstIAM (1:33). Root (1:31). Dpages 1.
Index ID 3. 2 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table sysindexes Object ID 2.
Index ID 1. FirstIAM (1:13). Root (1:14). Dpages 4.
Index ID 1. 6 pages used in 0 dedicated extents.
Index ID 255. FirstIAM (1:27). Root (1:64). Dpages 0.
Index ID 255. 37 pages used in 6 dedicated extents.
Total number of extents is 6.
***************************************************************
Table syscolumns Object ID 3.
Index ID 1. FirstIAM (1:26). Root (1:17). Dpages 12.
Index ID 1. 14 pages used in 1 dedicated extents.
Index ID 2. FirstIAM (1:35). Root (1:34). Dpages 5.
Index ID 2. 7 pages used in 0 dedicated extents.
Total number of extents is 1.
***************************************************************
Table systypes Object ID 4.
Index ID 1. FirstIAM (1:29). Root (1:30). Dpages 1.
Index ID 1. 3 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (1:37). Root (1:36). Dpages 1.
Index ID 2. 2 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table syscomments Object ID 6.
Index ID 1. FirstIAM (1:52). Root (1:51). Dpages 24.
Index ID 1. 27 pages used in 3 dedicated extents.
Total number of extents is 3.
***************************************************************
Table sysfiles1 Object ID 8.
Index ID 0. FirstIAM (1:12). Root (1:32). Dpages 1.
Index ID 0. 2 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table syspermissions Object ID 9.
Index ID 1. FirstIAM (1:72). Root (1:63). Dpages 1.
Index ID 1. 3 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table sysusers Object ID 10.
Index ID 1. FirstIAM (1:39). Root (1:38). Dpages 1.
Index ID 1. 3 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (1:42). Root (1:41). Dpages 1.
Index ID 2. 2 pages used in 0 dedicated extents.
Index ID 3. FirstIAM (1:44). Root (1:43). Dpages 1.
Index ID 3. 2 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table sysproperties Object ID 11.
Index ID 1. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 1. 0 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table sysdepends Object ID 12.
Index ID 1. FirstIAM (1:55). Root (1:54). Dpages 3.
Index ID 1. 5 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (1:58). Root (1:57). Dpages 2.
Index ID 2. 4 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table sysreferences Object ID 14.
Index ID 1. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 1. 0 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 2. 0 pages used in 0 dedicated extents.
Index ID 3. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 3. 0 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table sysfulltextcatalogs Object ID 19.
Index ID 1. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 1. 0 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 2. 0 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table sysfulltextnotify Object ID 24.
Index ID 1. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 1. 0 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table sysfilegroups Object ID 96.
Index ID 1. FirstIAM (1:47). Root (1:46). Dpages 1.
Index ID 1. 3 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (1:50). Root (1:49). Dpages 1.
Index ID 2. 2 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table dtproperties Object ID 517576882.
Index ID 1. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 1. 0 pages used in 0 dedicated extents.
Index ID 255. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 255. 0 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table BTSError Object ID 1061578820.
Index ID 1. FirstIAM (1:1076526). Root (1:1145807). Dpages 18042.
Index ID 1. 18102 pages used in 2261 dedicated extents.
Index ID 4. FirstIAM (1:1076527). Root (1:1157874). Dpages 1933.
Index ID 4. 1957 pages used in 243 dedicated extents.
Index ID 5. FirstIAM (1:1159153). Root (1:1159579). Dpages 2242.
Index ID 5. 2274 pages used in 283 dedicated extents.
Index ID 255. FirstIAM (1:355450). Root (1:355449). Dpages 0.
Index ID 255. 88231 pages used in 11036 dedicated extents.
Total number of extents is 13823.
***************************************************************
Table DriveSpace Object ID 1333579789.
Index ID 1. FirstIAM (1:1159157). Root (1:1159158). Dpages 1.
Index ID 1. 3 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (1:273386). Root (1:1159159). Dpages 1.
Index ID 2. 2 pages used in 0 dedicated extents.
Index ID 5. FirstIAM (1:273388). Root (1:273387). Dpages 1.
Index ID 5. 2 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table DBSize Object ID 1509580416.
Index ID 1. FirstIAM (1:1076523). Root (1:1076524). Dpages 5.
Index ID 1. 7 pages used in 0 dedicated extents.
Index ID 3. FirstIAM (1:1159155). Root (1:1076525). Dpages 1.
Index ID 3. 2 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table MessageCount Object ID 1669580986.
Index ID 1. FirstIAM (1:355452). Root (1:355448). Dpages 2.
Index ID 1. 4 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (1:411009). Root (1:411008). Dpages 1.
Index ID 2. 2 pages used in 0 dedicated extents.
Index ID 3. FirstIAM (1:411012). Root (1:411010). Dpages 1.
Index ID 3. 2 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table IFSMessageTypes Object ID 1993058136.
Index ID 1. FirstIAM (1:618). Root (1:619). Dpages 1.
Index ID 1. 3 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table IFSMessages Object ID 2009058193.
Index ID 1. FirstIAM (1:432792). Root (1:9362263). Dpages 44700.
Index ID 1. 44780 pages used in 5597 dedicated extents.
Index ID 6. FirstIAM (1:432793). Root (1:9404245). Dpages 10083.
Index ID 6. 10119 pages used in 1266 dedicated extents.
Index ID 255. FirstIAM (1:268). Root (1:267). Dpages 0.
Index ID 255. 9062113 pages used in 1133279 dedicated extents.
Total number of extents is 1140142.
***************************************************************
Table IFSUpdateError Object ID 2025058250.
Index ID 1. FirstIAM (1:116). Root (1:1168777). Dpages 800.
Index ID 1. 821 pages used in 102 dedicated extents.
Index ID 255. FirstIAM (1:113). Root (1:112). Dpages 0.
Index ID 255. 487 pages used in 93 dedicated extents.
Total number of extents is 195.
***************************************************************
Processed 46 entries in sysindexes for database ID 7.
File 1. Number of extents = 1154232, used pages = 9229133, reserved pages =9233856.
File 1 (number of mixed extents = 40, mixed pages = 320).
Object ID 1, Index ID 0, data extents 0, pages 5, mixed extent pages 5.
Object ID 1, Index ID 2, index extents 0, pages 2, mixed extent pages 2.
Object ID 1, Index ID 3, index extents 0, pages 2, mixed extent pages 2.
Object ID 2, Index ID 0, data extents 0, pages 6, mixed extent pages 6.
Object ID 2, Index ID 255, index extents 6, pages 37, mixed extent
pages 22.
Object ID 3, Index ID 0, data extents 1, pages 14, mixed extent pages 9.
Object ID 3, Index ID 2, index extents 0, pages 7, mixed extent pages 7.
Object ID 4, Index ID 0, data extents 0, pages 3, mixed extent pages 3.
Object ID 4, Index ID 2, index extents 0, pages 2, mixed extent pages 2.
Object ID 6, Index ID 0, data extents 3, pages 27, mixed extent pages
10.
Object ID 8, Index ID 0, data extents 0, pages 2, mixed extent pages 2.
Object ID 9, Index ID 0, data extents 0, pages 3, mixed extent pages 3.
Object ID 10, Index ID 0, data extents 0, pages 3, mixed extent pages 3.
Object ID 10, Index ID 2, index extents 0, pages 2, mixed extent pages
2.
Object ID 10, Index ID 3, index extents 0, pages 2, mixed extent pages
2.
Object ID 12, Index ID 0, data extents 0, pages 5, mixed extent pages 5.
Object ID 12, Index ID 2, index extents 0, pages 4, mixed extent pages
4.
Object ID 96, Index ID 0, data extents 0, pages 3, mixed extent pages 3.
Object ID 96, Index ID 2, index extents 0, pages 2, mixed extent pages
2.
Object ID 99, Index ID 0, data extents 22, pages 91, mixed extent pages
1.
Object ID 1061578820, Index ID 0, data extents 2261, pages 18102, mixed
extent pages 26.
Object ID 1061578820, Index ID 4, index extents 243, pages 1957, mixed
extent pages 24.
Object ID 1061578820, Index ID 5, index extents 283, pages 2274, mixed
extent pages 26.
Object ID 1061578820, Index ID 255, index extents 11036, pages 88231,
mixed extent pages 28.
Object ID 1333579789, Index ID 0, data extents 0, pages 3, mixed extent
pages 3.
Object ID 1333579789, Index ID 2, index extents 0, pages 2, mixed
extent pages 2.
Object ID 1333579789, Index ID 5, index extents 0, pages 2, mixed
extent pages 2.
Object ID 1509580416, Index ID 0, data extents 0, pages 7, mixed extent
pages 7.
Object ID 1509580416, Index ID 3, index extents 0, pages 2, mixed
extent pages 2.
Object ID 1669580986, Index ID 0, data extents 0, pages 4, mixed extent
pages 4.
Object ID 1669580986, Index ID 2, index extents 0, pages 2, mixed
extent pages 2.
Object ID 1669580986, Index ID 3, index extents 0, pages 2, mixed
extent pages 2.
Object ID 1993058136, Index ID 0, data extents 0, pages 3, mixed extent
pages 3.
Object ID 2009058193, Index ID 0, data extents 5597, pages 44780, mixed
extent pages 10.
Object ID 2009058193, Index ID 6, index extents 1266, pages 10119,
mixed extent pages 10.
Object ID 2009058193, Index ID 255, index extents 1133279, pages
9062113, mixed extent pages 28.
Object ID 2025058250, Index ID 0, data extents 102, pages 821, mixed
extent pages 26.
Object ID 2025058250, Index ID 255, index extents 93, pages 487, mixed
extent pages 20.
Total number of extents = 1154232, used pages = 9229133, reserved pages =9233856 in this database.
(number of mixed extents = 40, mixed pages = 320) in this database.
CHECKALLOC found 0 allocation errors and 0 consistency errors in database
[Database Name].
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
--
Message posted via http://www.sqlmonster.com|||I have a weekly job that executes the following:
dbcc dbreindex(N'[dbo].[Table Name]', N'', 0, sorted_data_reorg)
Does that rebuild the index?
--
Message posted via http://www.sqlmonster.com|||I checked out dbcc cleantable. It appears to only have an effect should
the text column be dropped, as per what I read in BOL. Do you know if it
applies to deleting rows from a table that contains a Text column?
--
Message posted via http://www.sqlmonster.com|||This table has 8 rows, with a primary key as a clustered index.
How can I know what an object ID refers to?
--
Message posted via http://www.sqlmonster.com|||
Not sure I understand the question. The DBCC CHECKTABLE statement you
ran said you have 3,880,797 rows in your table (not 8). To get an
object name from an object ID, change to the DB in question (in Query
Analyzer) and run
select object_name(2009058193)
where 2009058193 is the object ID. But you already know the table name,
so why the question? From the DBCC CHECKALLOC you posted:
***************************************************************
Table IFSMessages Object ID 2009058193.
Index ID 1. FirstIAM (1:432792). Root (1:9362263). Dpages 44700.
Index ID 1. 44780 pages used in 5597 dedicated extents.
Index ID 6. FirstIAM (1:432793). Root (1:9404245). Dpages 10083.
Index ID 6. 10119 pages used in 1266 dedicated extents.
Index ID 255. FirstIAM (1:268). Root (1:267). Dpages 0.
Index ID 255. 9062113 pages used in 1133279 dedicated extents.
Total number of extents is 1140142.
***************************************************************
you can see "index" 255 has the majority of the extents. That index
number refers to the text or image data associated with the table. So
you can see 5597 extents (0.5% of the space consumed by the table) are
attributable to the clustered index (i.e. this is the actual table data
excluding the LOBs). 1266 extents (0.1% of the space consumed by the
table) are attributable to the single nonclustered index on that table
(indid 6) and the remaining 1133279 extents (99.4% of the total space)
is due to the LOB data in the table (i.e. indid 255). Since an extent
is 64K (eight 8K pages) that means about 69.2G of the total data in your
database is attributable to your LOB data in the IFSMessages table.
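If it helps, you can get roughly the same per-index breakdown without running CHECKALLOC by querying sysindexes directly. A minimal sketch (SQL Server 2000 syntax; run it while connected to the database in question):
-- Per-index page usage for the table, straight from sysindexes.
-- indid 1 = clustered index, 2-250 = nonclustered indexes, 255 = text/image (LOB) pages.
SELECT o.name AS table_name,
       i.indid AS index_id,
       i.used AS used_pages,
       i.reserved AS reserved_pages,
       CAST(i.used * 8 / 1024.0 AS decimal(12,1)) AS used_mb
FROM sysobjects o
JOIN sysindexes i ON i.id = o.id
WHERE o.name = 'IFSMessages'   -- table name taken from the CHECKALLOC output above
ORDER BY i.indid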
--
*mike hodgson* |/ database administrator/ | mallesons stephen jaques
*T* +61 (2) 9296 3668 |* F* +61 (2) 9296 3885 |* M* +61 (408) 675 907
*E* mailto:mike.hodgson@.mallesons.nospam.com |* W* http://www.mallesons.com
Robert Richards via SQLMonster.com wrote:
>This table has 8 rows, with a primary key as a clustered index.
>How can I know what objectid refers to?
>
>
|||
The DBCC SHOWCONTIG you did is only on the clustered index of the table
(index ID 1).
It will most likely take a long time to run (because it'll have to scan
about 70GB worth of pages/extents) but is it possible to get a DBCC
SHOWCONTIG on index 255 and post it in this thread to see the
fragmentation of the text data in that table? WARNING: It'd be best to
do this out of hours because it will probably take several hours to run
and will hold shared locks on the entire table while it's running, which
will cause blocking issues with data modification statements to that table.
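For reference, the command would look something like this (the object ID and index ID are the ones from the CHECKALLOC output above; run it from Query Analyzer while connected to that database):
-- Scan the text/image allocation (index ID 255) of IFSMessages; expect this to take hours.
DBCC SHOWCONTIG (2009058193, 255)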
--
*mike hodgson* |/ database administrator/ | mallesons stephen jaques
*T* +61 (2) 9296 3668 |* F* +61 (2) 9296 3885 |* M* +61 (408) 675 907
*E* mailto:mike.hodgson@.mallesons.nospam.com |* W* http://www.mallesons.com
Robert Richards via SQLMonster.com wrote:
>I am not sure if this is helpful but I have included the results of running
>"dbcc showcontig" on the table:
>[full DBCC SHOWCONTIG, CHECKTABLE and CHECKALLOC output quoted verbatim; snipped -- see the start of this thread]
|||That rebuilds all indexes against that one specific table.
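In other words, with the empty index name every index on that table is rebuilt; naming an index limits the rebuild to just that one. A rough sketch of both forms (I've left off the trailing sorted_data_reorg argument, which I believe is only accepted for backward compatibility, and the index name in the second call is hypothetical):
-- Empty index name: rebuild all indexes on the table, keeping the existing fill factor (0).
DBCC DBREINDEX (N'[dbo].[IFSMessages]', N'', 0)
-- Named index: rebuild only that one index, e.g. the clustered primary key.
DBCC DBREINDEX (N'[dbo].[IFSMessages]', N'PK_IFSMessages', 0)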
Sincerely,
Anthony Thomas
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:fdb75573b04e4e2fba5eb109094771e4@.SQLMonster.com...
I have a weekly job that executes the following:
dbcc dbreindex(N'[dbo].[Table Name]', N'', 0, sorted_data_reorg)
Does that rebuild the index?
--
Message posted via http://www.sqlmonster.com|||In Query Analyzer, switch to the database this refers to then execute the
following:
SELECT OBJECT_NAME(<object_id>)
This will return the name of the object this id belongs to. Or,
SELECT [name]
FROM sysobjects
WHERE id = <object_id>
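For example, using one of the object IDs from the CHECKALLOC output you posted (switch to that database first):
USE [YourDatabaseName]   -- hypothetical name; the posted output masks it as [Database Name]
GO
SELECT OBJECT_NAME(2009058193)                      -- should return IFSMessages
SELECT [name] FROM sysobjects WHERE id = 2009058193 -- same answer via sysobjects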
Sincerely,
Anthony Thomas
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:3a0844e9a3c54080a0700d4b1565d3a7@.SQLMonster.com...
This table has 8 rows, with a primary key as a clustered index.
How can I know what objectid refers to?
--
Message posted via http://www.sqlmonster.com|||So if I have a clustered index on this table (a primary key on an identity
seed which is IndexId = 1), then how can I reduce extent usage on the LOB?
--
Message posted via http://www.sqlmonster.com|||
You can't. The clustered index has nothing to do with the text data
fragmentation (i.e. extent usage) in the table. The text data appears as
index 255. The clustered index displays as index 1.
Besides, you already have a clustered index in that table (index ID 1).
If it was a heap then there would be no index 1 and instead there would
be an index ID 0 (the unsorted table data).
Does anyone know if you can defrag the text data with "DBCC INDEXDEFRAG
(7, 2009058193, *255*)" in order to reclaim LOB space? I suspect you
can't but I've never tried before and I can't see any mention of it in
BOL (mentions clustered & nonclustered indexes but not index 255).
--
*mike hodgson* |/ database administrator/ | mallesons stephen jaques
*T* +61 (2) 9296 3668 |* F* +61 (2) 9296 3885 |* M* +61 (408) 675 907
*E* mailto:mike.hodgson@.mallesons.nospam.com |* W* http://www.mallesons.com
Robert Richards via SQLMonster.com wrote:
>So if I have a clustered index on this table (a primary key on a identity
>seed which is IndexId = 1), then how can I reduce extent usage on the LOB?
>
>
|||My apologies, but I commented on the lack of a clustered index on this table
but clearly the output shows one. I inadvertently scanned through all the
output attempting to find the large set and misinterpreted the last part of
the CHECKALLOC output.
DBCC SHOWCONTIG scanning [Table Name] table...
Table: [Table Name] (2009058193); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned........................: 42259
- Extents Scanned.......................: 5292
- Extent Switches.......................: 5291
- Avg. Pages per Extent..................: 8.0
- Scan Density [Best Count:Actual Count]......: 99.83% [5283:5292]
- Logical Scan Fragmentation ..............: 0.00%
- Extent Scan Fragmentation ...............: 0.13%
- Avg. Bytes Free per Page................: 781.0
- Avg. Page Density (full)................: 90.35%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
***************************************************************
Table IFSMessages Object ID 2009058193.
Index ID 1. FirstIAM (1:432792). Root (1:9362263). Dpages 44700.
Index ID 1. 44780 pages used in 5597 dedicated extents.
Index ID 6. FirstIAM (1:432793). Root (1:9404245). Dpages 10083.
Index ID 6. 10119 pages used in 1266 dedicated extents.
Index ID 255. FirstIAM (1:268). Root (1:267). Dpages 0.
Index ID 255. 9062113 pages used in 1133279 dedicated extents.
Total number of extents is 1140142.
***************************************************************
Object ID 2009058193, Index ID 0, data extents 5597, pages 44780, mixed
extent pages 10.
Object ID 2009058193, Index ID 6, index extents 1266, pages 10119,
mixed extent pages 10.
Object ID 2009058193, Index ID 255, index extents 1133279, pages
9062113, mixed extent pages 28.
What does bother me though is that not only does the LOB comprise the bulk
of the data, it is also sitting in the head of the file (FirstIAM(1:268)).
Somewhere in this thread I know I recommended separating your LOBs out to a
separate 1-to-1 related table, placed on a dedicated filegroup, with
multiple dedicated files. This would certainly clear this up, at least
once. But at this point I'm still not sure that we can even verify that the
LOB data is fragmented at all.
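For what it's worth, the separation I mentioned would look roughly like this. A sketch only: the filegroup name, file path, sizes and column names are made up, and you would still need to copy the text data across and drop the old column afterwards:
-- Dedicated filegroup and file for the LOB data (logical name, path and sizes are hypothetical).
ALTER DATABASE [YourDatabaseName] ADD FILEGROUP LOBData
ALTER DATABASE [YourDatabaseName]
    ADD FILE (NAME = YourDatabase_LOB1,
              FILENAME = 'E:\MSSQL\Data\YourDatabase_LOB1.ndf',
              SIZE = 10GB, FILEGROWTH = 1GB)
    TO FILEGROUP LOBData
GO
-- 1-to-1 child table holding just the key and the text column; both the row data (ON)
-- and the text/image data (TEXTIMAGE_ON) are placed on the new filegroup.
CREATE TABLE dbo.IFSMessageBodies (
    MessageId   int NOT NULL
        CONSTRAINT PK_IFSMessageBodies PRIMARY KEY CLUSTERED
        CONSTRAINT FK_IFSMessageBodies_IFSMessages
            FOREIGN KEY REFERENCES dbo.IFSMessages (MessageId),  -- column name is hypothetical
    MessageBody text NULL
) ON LOBData TEXTIMAGE_ON LOBData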
Sincerely,
Anthony Thomas
"Anthony Thomas" <ALThomas@.kc.rr.com> wrote in message
news:%23UaN8CiaFHA.2900@.TK2MSFTNGP15.phx.gbl...
Well, certainly the bulk of your data resides in the LOB for the following
object:
Object ID 2009058193, Index ID 255, index extents 1133279, pages
9062113, mixed extent pages 28.
Another problem I see is the lack of usage for Clustered Indexes. Every
table should be defined with a Clustered Index. How to choose the
attributes for these indexes is a whole other discussion, and an argument among
my peers here; however, the need for them cannot be overstated, especially
when using LOBs.
I would certainly try to place one on Object ID 2009058193 to see if it
improves the storage.
Sincerely,
Anthony Thomas
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:4a0b6d430d914267a3283d041ca13783@.SQLMonster.com...
I am not sure if this is helpful but I have included the results of running
"dbcc showcontig" on the table:
DBCC SHOWCONTIG scanning [Table Name] table...
Table: [Table Name] (2009058193); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned........................: 42259
- Extents Scanned.......................: 5292
- Extent Switches.......................: 5291
- Avg. Pages per Extent..................: 8.0
- Scan Density [Best Count:Actual Count]......: 99.83% [5283:5292]
- Logical Scan Fragmentation ..............: 0.00%
- Extent Scan Fragmentation ...............: 0.13%
- Avg. Bytes Free per Page................: 781.0
- Avg. Page Density (full)................: 90.35%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
The following is from running dbcc checktable (run two days ago):
DBCC results for [Table Name].
There are 3880797 rows in 44707 pages for object [Table Name].
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
Lastly the following is from running dbcc checkalloc against the database:
DBCC results for [Database Name].
***************************************************************
Table sysobjects Object ID 1.
Index ID 1. FirstIAM (1:10). Root (1:11). Dpages 3.
Index ID 1. 5 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (1:25). Root (1:15). Dpages 1.
Index ID 2. 2 pages used in 0 dedicated extents.
Index ID 3. FirstIAM (1:33). Root (1:31). Dpages 1.
Index ID 3. 2 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table sysindexes Object ID 2.
Index ID 1. FirstIAM (1:13). Root (1:14). Dpages 4.
Index ID 1. 6 pages used in 0 dedicated extents.
Index ID 255. FirstIAM (1:27). Root (1:64). Dpages 0.
Index ID 255. 37 pages used in 6 dedicated extents.
Total number of extents is 6.
***************************************************************
Table syscolumns Object ID 3.
Index ID 1. FirstIAM (1:26). Root (1:17). Dpages 12.
Index ID 1. 14 pages used in 1 dedicated extents.
Index ID 2. FirstIAM (1:35). Root (1:34). Dpages 5.
Index ID 2. 7 pages used in 0 dedicated extents.
Total number of extents is 1.
***************************************************************
Table systypes Object ID 4.
Index ID 1. FirstIAM (1:29). Root (1:30). Dpages 1.
Index ID 1. 3 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (1:37). Root (1:36). Dpages 1.
Index ID 2. 2 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table syscomments Object ID 6.
Index ID 1. FirstIAM (1:52). Root (1:51). Dpages 24.
Index ID 1. 27 pages used in 3 dedicated extents.
Total number of extents is 3.
***************************************************************
Table sysfiles1 Object ID 8.
Index ID 0. FirstIAM (1:12). Root (1:32). Dpages 1.
Index ID 0. 2 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table syspermissions Object ID 9.
Index ID 1. FirstIAM (1:72). Root (1:63). Dpages 1.
Index ID 1. 3 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table sysusers Object ID 10.
Index ID 1. FirstIAM (1:39). Root (1:38). Dpages 1.
Index ID 1. 3 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (1:42). Root (1:41). Dpages 1.
Index ID 2. 2 pages used in 0 dedicated extents.
Index ID 3. FirstIAM (1:44). Root (1:43). Dpages 1.
Index ID 3. 2 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table sysproperties Object ID 11.
Index ID 1. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 1. 0 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table sysdepends Object ID 12.
Index ID 1. FirstIAM (1:55). Root (1:54). Dpages 3.
Index ID 1. 5 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (1:58). Root (1:57). Dpages 2.
Index ID 2. 4 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table sysreferences Object ID 14.
Index ID 1. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 1. 0 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 2. 0 pages used in 0 dedicated extents.
Index ID 3. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 3. 0 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table sysfulltextcatalogs Object ID 19.
Index ID 1. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 1. 0 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 2. 0 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table sysfulltextnotify Object ID 24.
Index ID 1. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 1. 0 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table sysfilegroups Object ID 96.
Index ID 1. FirstIAM (1:47). Root (1:46). Dpages 1.
Index ID 1. 3 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (1:50). Root (1:49). Dpages 1.
Index ID 2. 2 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table dtproperties Object ID 517576882.
Index ID 1. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 1. 0 pages used in 0 dedicated extents.
Index ID 255. FirstIAM (0:0). Root (0:0). Dpages 0.
Index ID 255. 0 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table BTSError Object ID 1061578820.
Index ID 1. FirstIAM (1:1076526). Root (1:1145807). Dpages 18042.
Index ID 1. 18102 pages used in 2261 dedicated extents.
Index ID 4. FirstIAM (1:1076527). Root (1:1157874). Dpages 1933.
Index ID 4. 1957 pages used in 243 dedicated extents.
Index ID 5. FirstIAM (1:1159153). Root (1:1159579). Dpages 2242.
Index ID 5. 2274 pages used in 283 dedicated extents.
Index ID 255. FirstIAM (1:355450). Root (1:355449). Dpages 0.
Index ID 255. 88231 pages used in 11036 dedicated extents.
Total number of extents is 13823.
***************************************************************
Table DriveSpace Object ID 1333579789.
Index ID 1. FirstIAM (1:1159157). Root (1:1159158). Dpages 1.
Index ID 1. 3 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (1:273386). Root (1:1159159). Dpages 1.
Index ID 2. 2 pages used in 0 dedicated extents.
Index ID 5. FirstIAM (1:273388). Root (1:273387). Dpages 1.
Index ID 5. 2 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table DBSize Object ID 1509580416.
Index ID 1. FirstIAM (1:1076523). Root (1:1076524). Dpages 5.
Index ID 1. 7 pages used in 0 dedicated extents.
Index ID 3. FirstIAM (1:1159155). Root (1:1076525). Dpages 1.
Index ID 3. 2 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table MessageCount Object ID 1669580986.
Index ID 1. FirstIAM (1:355452). Root (1:355448). Dpages 2.
Index ID 1. 4 pages used in 0 dedicated extents.
Index ID 2. FirstIAM (1:411009). Root (1:411008). Dpages 1.
Index ID 2. 2 pages used in 0 dedicated extents.
Index ID 3. FirstIAM (1:411012). Root (1:411010). Dpages 1.
Index ID 3. 2 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table IFSMessageTypes Object ID 1993058136.
Index ID 1. FirstIAM (1:618). Root (1:619). Dpages 1.
Index ID 1. 3 pages used in 0 dedicated extents.
Total number of extents is 0.
***************************************************************
Table IFSMessages Object ID 2009058193.
Index ID 1. FirstIAM (1:432792). Root (1:9362263). Dpages 44700.
Index ID 1. 44780 pages used in 5597 dedicated extents.
Index ID 6. FirstIAM (1:432793). Root (1:9404245). Dpages 10083.
Index ID 6. 10119 pages used in 1266 dedicated extents.
Index ID 255. FirstIAM (1:268). Root (1:267). Dpages 0.
Index ID 255. 9062113 pages used in 1133279 dedicated extents.
Total number of extents is 1140142.
***************************************************************
Table IFSUpdateError Object ID 2025058250.
Index ID 1. FirstIAM (1:116). Root (1:1168777). Dpages 800.
Index ID 1. 821 pages used in 102 dedicated extents.
Index ID 255. FirstIAM (1:113). Root (1:112). Dpages 0.
Index ID 255. 487 pages used in 93 dedicated extents.
Total number of extents is 195.
***************************************************************
Processed 46 entries in sysindexes for database ID 7.
File 1. Number of extents = 1154232, used pages = 9229133, reserved pages = 9233856.
File 1 (number of mixed extents = 40, mixed pages = 320).
Object ID 1, Index ID 0, data extents 0, pages 5, mixed extent pages 5.
Object ID 1, Index ID 2, index extents 0, pages 2, mixed extent pages 2.
Object ID 1, Index ID 3, index extents 0, pages 2, mixed extent pages 2.
Object ID 2, Index ID 0, data extents 0, pages 6, mixed extent pages 6.
Object ID 2, Index ID 255, index extents 6, pages 37, mixed extent pages 22.
Object ID 3, Index ID 0, data extents 1, pages 14, mixed extent pages 9.
Object ID 3, Index ID 2, index extents 0, pages 7, mixed extent pages 7.
Object ID 4, Index ID 0, data extents 0, pages 3, mixed extent pages 3.
Object ID 4, Index ID 2, index extents 0, pages 2, mixed extent pages 2.
Object ID 6, Index ID 0, data extents 3, pages 27, mixed extent pages 10.
Object ID 8, Index ID 0, data extents 0, pages 2, mixed extent pages 2.
Object ID 9, Index ID 0, data extents 0, pages 3, mixed extent pages 3.
Object ID 10, Index ID 0, data extents 0, pages 3, mixed extent pages 3.
Object ID 10, Index ID 2, index extents 0, pages 2, mixed extent pages 2.
Object ID 10, Index ID 3, index extents 0, pages 2, mixed extent pages 2.
Object ID 12, Index ID 0, data extents 0, pages 5, mixed extent pages 5.
Object ID 12, Index ID 2, index extents 0, pages 4, mixed extent pages 4.
Object ID 96, Index ID 0, data extents 0, pages 3, mixed extent pages 3.
Object ID 96, Index ID 2, index extents 0, pages 2, mixed extent pages 2.
Object ID 99, Index ID 0, data extents 22, pages 91, mixed extent pages 1.
Object ID 1061578820, Index ID 0, data extents 2261, pages 18102, mixed extent pages 26.
Object ID 1061578820, Index ID 4, index extents 243, pages 1957, mixed extent pages 24.
Object ID 1061578820, Index ID 5, index extents 283, pages 2274, mixed extent pages 26.
Object ID 1061578820, Index ID 255, index extents 11036, pages 88231, mixed extent pages 28.
Object ID 1333579789, Index ID 0, data extents 0, pages 3, mixed extent pages 3.
Object ID 1333579789, Index ID 2, index extents 0, pages 2, mixed extent pages 2.
Object ID 1333579789, Index ID 5, index extents 0, pages 2, mixed extent pages 2.
Object ID 1509580416, Index ID 0, data extents 0, pages 7, mixed extent pages 7.
Object ID 1509580416, Index ID 3, index extents 0, pages 2, mixed extent pages 2.
Object ID 1669580986, Index ID 0, data extents 0, pages 4, mixed extent pages 4.
Object ID 1669580986, Index ID 2, index extents 0, pages 2, mixed extent pages 2.
Object ID 1669580986, Index ID 3, index extents 0, pages 2, mixed extent pages 2.
Object ID 1993058136, Index ID 0, data extents 0, pages 3, mixed extent pages 3.
Object ID 2009058193, Index ID 0, data extents 5597, pages 44780, mixed extent pages 10.
Object ID 2009058193, Index ID 6, index extents 1266, pages 10119, mixed extent pages 10.
Object ID 2009058193, Index ID 255, index extents 1133279, pages 9062113, mixed extent pages 28.
Object ID 2025058250, Index ID 0, data extents 102, pages 821, mixed extent pages 26.
Object ID 2025058250, Index ID 255, index extents 93, pages 487, mixed extent pages 20.
Total number of extents = 1154232, used pages = 9229133, reserved pages = 9233856 in this database.
(number of mixed extents = 40, mixed pages = 320) in this database.
CHECKALLOC found 0 allocation errors and 0 consistency errors in database [Database Name].
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
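At 8 KB per page, the Index ID 255 (text/image) allocation for IFSMessages (9,062,113 of the 9,229,133 used pages) works out to roughly 69 GB, so nearly all of the space in this database is LOB data for that one table. A quick check of the arithmetic:

SELECT (9062113 * 8.0) / 1024 / 1024 AS ApproxLOBGB  -- 8 KB pages; returns roughly 69.1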
--
Message posted via http://www.sqlmonster.com|||Hey, Robert? Didn't you just ask the same question back in February?
Paul Randal answered your question. You can't do much better than straight
from the horse's mouth, as they say.
"The space used by the deleted text cannot be reclaimed in SQL Server 2000.
The (clumsy) solution is to bcp-out then in. In SQL Server 2005 we've fixed
this by adding LOB compaction to the shrink and defrag operations.
Regards
--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine"
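For anyone already on 2005, a minimal sketch of what that fix looks like when you drive it by hand (assuming the big table here is dbo.IFSMessages and the data sits in file 1):

ALTER INDEX ALL ON dbo.IFSMessages REORGANIZE WITH (LOB_COMPACTION = ON)  -- compacts the text/image pages in place
DBCC SHRINKFILE (1, 70000)  -- example target in MB; run after the reorganize if you want the space back on disk

On SQL Server 2000, the bcp out/in rebuild Paul describes is the only way to compact those pages.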
Sincerely,
Anthony Thomas
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:4ba2145437d645fb8d8d110f8c01ea71@.SQLMonster.com...
I do not believe I am being understood. When I delete the 500,000 records, they
are not moved to an archive table; they are gone.
The 80GB max is due to available disk space.
When I delete records, I need to reclaim the unused space. Are the two
steps below a way of doing that?
1. Perform the weekly purge (deleting approximately 500,000 records).
2. Shrink the data file: "dbcc shrinkfile (filename, TRUNCATEONLY)"
Is there a better way to reclaim unused space?
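For what it is worth, the two forms of that command do different things (a sketch only, with a made-up logical file name):

DBCC SHRINKFILE (MyDB_Data, TRUNCATEONLY)  -- releases only free space at the end of the file; no pages are moved
DBCC SHRINKFILE (MyDB_Data, 60000)         -- moves pages toward the front of the file and targets roughly 60 GB (size in MB)

Neither form can give back space that is still reserved by the table, which is exactly what happens to the deleted text/image pages here.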
--
Message posted via http://www.sqlmonster.com|||I work with a much smaller subset in my TEST environment and space is
reclaimed after a delete. It does not make sense that no space is reclaimed at
the larger scale.
--
Message posted via http://www.sqlmonster.com|||Just curious, how did the FirstIAM tell you that it was at the head of the
file? Sounds like a good thing to be able to understand. Thanks for all
your help.
--
Message posted via http://www.sqlmonster.com|||BOL gives a good summary of how IAM (Index Allocation Map) pages work:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/architec/8_ar_da2_1bar.asp
This is also a fairly helpful page in understanding index structures:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/architec/8_ar_da2_8sit.asp
Basically, extents used by indexes (and LOBs) are tracked by a linked
list of IAM pages. The head of that linked list is called the
FirstIAM. So, if you have a scan through the results of that DBCC
CHECKALLOC you did, you'll see the head of the linked list of IAM pages
that tracks the LOB data for the IFSMessages table is at 1:268 (this
represents the 268th page in file 1; if your filegroup has only a single
file in it then the file number is irrelevant). Even though this page is
not at the very beginning of the file (all the system tables in the
database, eg. sysobjects (1:10), start their IAM page chain before
1:268), it's far closer to the beginning than the end (eg. the clustered
index of BTSError starts at 1:1076526).
While this is a moderately interesting fact, I'm not sure how relevant
it really is because the IAM pages (and the extents that they map) are
not necessarily in consecutive order, depending on how fragmented they
are. Remember, it's a linked list and the links could jump all over the
place. It's probably a reasonable assumption, however, that the IAM
pages in the LOB chain for IFSMessages are all together because that
data accounts for the vast majority of the data in the database (as
we've already determined) and if the chain starts at 1:268 and others
start at 1:1076526 (and above) then the huge gap between those points is
most likely filled with the rest of the IAM chain for IFSMessage LOB data.
Does that make sense?
What that really means, to me at least, is that most of those IAM pages
(for the IFSMessage LOB data) were allocated before (chronologically)
the IAM pages for other objects in your DB (like BTSError for example,
and even the clustered index of IFSMessages itself). I'm not sure how
that's possible (that the LOB IAM chain is all before the clustered
index IAM chain for the same table). Perhaps the head of the IAM chain
for the clustered index got shuffled towards the end during an index
rebuild (I don't know if index rebuilds & defrags change the order of
the IAM pages or just the extents that they refer to). Do any SQL
Server architecture gurus out there want to confirm/deny that?
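If you want to look at that page yourself, the undocumented DBCC PAGE command will dump it (it is unsupported, so better run against a restored copy, and substitute your real database name):

DBCC TRACEON (3604)  -- send DBCC output to the client instead of the errorlog
DBCC PAGE ('YourDatabaseName', 1, 268, 3)  -- file 1, page 268, i.e. the FirstIAM of the IFSMessages LOB chain
DBCC TRACEOFF (3604)

The dump shows the page header and the extent ranges that this IAM page maps, and from there you can follow the chain.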
--
mike hodgson | database administrator | mallesons stephen jaques
T +61 (2) 9296 3668 | F +61 (2) 9296 3885 | M +61 (408) 675 907
E mailto:mike.hodgson@.mallesons.nospam.com | W http://www.mallesons.com
Robert Richards via SQLMonster.com wrote:
>Just curious, how did the FirstIAM tell you that it was at the head of the
>file? Sounds like a good thing to be able to understand. Thanks for all
>your help.
>
>
|||It is not a matter of scale; it is just that the scale makes the matter more visible.
Once the pages are marked for deletion, they are not reclaimed, but can be
reused. On a smaller scale, this happens a lot sooner.
I just hadn't realized that no database maintenance operation could clear up
old, unused LOB pages. But, once the rows are deleted, new data can reuse that
space. In your case, however, you are slowly reducing your use of this database
and were hoping to decrease its size over time. Apparently, your only option is
to create a temp table, move the data, drop the old table, and rename the temp
table in order to reclaim this space, accounting for RI restrictions, of course.
I would consider scheduling this activity after every large-scale delete:
weekly, monthly, or quarterly, as the case may be.
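A rough sketch of that rebuild, assuming the big table is dbo.IFSMessages and leaving out the indexes, constraints, and permissions you would have to re-create afterwards:

SELECT * INTO dbo.IFSMessages_staging FROM dbo.IFSMessages  -- copies only the live rows, so the text/image data is written compactly
DROP TABLE dbo.IFSMessages
EXEC sp_rename 'dbo.IFSMessages_staging', 'IFSMessages'
DBCC SHRINKFILE (1, 70000)  -- example target in MB; hands the freed space in data file 1 back to the OS

Keep in mind the copy needs room for a second set of the data while it runs, and the table is unavailable for the duration, so schedule it in a maintenance window.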
Sincerely,
Anthony Thomas
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:60a2af8b2b12488b8acc76fa34b27bd0@.SQLMonster.com...
I work with a much smaller subset in my TEST environment and space is
reclaimed after a delete. It doesn't make sense why it does not reclaim any
space on a larger scale.
--
Message posted via http://www.sqlmonster.com