My datafile is at 80GB. The data used in the datafile is about 70GB and
growing, with approximately 4,000,000 records in a table that contains a
Text field. I have a weekly purge that deletes about 500,000 records. (Disk
space is at a premium.) When I delete the 500,000 records from the table, the
space reported as used in the data file does not go down; it stays allocated,
and the file keeps growing as new data arrives. I need to keep the datafile
comfortably below 80GB.
My understanding is that shrinking the data file is not the best thing to
do, but if I have to, then I will. So essentially, to keep the data file
comfortably below 80GB I would:
1. Perform the weekly purge (deleting approximately 500,000 records).
2. Shrink the datafile: DBCC SHRINKFILE (filename, TRUNCATEONLY)
Would the above 2 steps be the means of keeping my datafile comfortably
below 80GB?
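In T-SQL terms, a minimal sketch of those two steps might look like this (the
table, column, and logical file names are placeholders, and the SET ROWCOUNT
batching is just one SQL 2000 era way to keep the transaction log in check):

-- Step 1: weekly purge, deleted in batches to limit log growth
SET ROWCOUNT 50000
WHILE 1 = 1
BEGIN
    DELETE FROM dbo.BigTable
    WHERE CreatedDate < DATEADD(month, -6, GETDATE())
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0

-- Step 2: release free space at the end of the file back to the OS
DBCC SHRINKFILE (MyDB_Data, TRUNCATEONLY)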
You can turn off the auto-grow option and lock your
datafile size at 80GB. Make sure you have enough space
for your transaction log, though, so that you can still
delete records weekly.
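For example (the database and logical file names are placeholders; MAXSIZE
caps the file and FILEGROWTH 0 disables auto-grow):

ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDB_Data, MAXSIZE = 81920MB)

ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDB_Data, FILEGROWTH = 0)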
Shrink is really not a good operation, especially on large DBs.
Regards.
"Robert Richards via droptable.com" wrote:
> My datafile is at 80GB. The data used in the datafile is about 70GB and
> growing, with approximately 4,000,000 records in a table that contains a
> Text field. I have a weekly purge that deletes about 500,000 records. (Disk
> space is at a premium.) When I delete the 500,000 records in the table, the
> data used in the data file does not reduce, but remains the same and keeps
> on growing. I need to keep the datafile comfortably below 80GB.
> My understanding is that shrinking the data file is not the best thing to
> do, but if I have to, then I will. So essentially, to keep the data file
> comfortably below 80GB I would:
> 1. Perform the weekly purge (deleting approximately 500,000 records).
> 2. Shrink the datafile "dbcc shrinkfile ( filename, TRUNCATEONLY)"
> Would the above 2 steps be the means of keeping my datafile comfortably
> below 80GB?
> --
> Message posted via http://www.droptable.com
>
|||Turning off auto-grow does not sound like a solution. At the current rate,
the data file is growing and will eventually fill up the 80GB. The purge is
not freeing space in the data file, probably due to a high-water mark.
Therefore, even though I am keeping the number of records at 4,000,000 or
below, the unused space never gets recovered, and the datafile keeps growing
even though the number of records remains approximately the same.
I understand that ShrinkFile is not the best option, but what other option
is there?
|||Hi,
What you are doing is fine. Keep the weekly purge, or move the old data into
a history database.
Since you are going to store new data in the same file, you may not need to
shrink it: the space you purge will be reused for the incoming data.
What is the main reason you need to keep 80GB as the maximum for your
database? There are many terabyte-plus databases around the globe running
smoothly.
Thanks
Hari
Sql Server MVP
"Robert Richards via droptable.com" <forum@.nospam.droptable.com> wrote in
message news:0be9557414954ceca658ef750e5bb45e@.droptable.co m...
> My datafile is at 80GB. The data used in the datafile is about 70GB and
> growing, with approximately 4,000,000 records in a table that contains a
> Text field. I have a weekly purge that deletes about 500,000 records.
> (Disk
> space is at a premium.) When I delete the 500,000 records in the table,
> the
> data used in the data file does not reduce, but remains the same and keeps
> on growing. I need to keep the datafile comfortably below 80GB.
> My understanding is that shrinking the data file is not the best thing to
> do, but if I have to, then I will. So essentially, to keep the data file
> comfortably below 80GB I would:
> 1. Perform the weekly purge (deleting approximately 500,000 records).
> 2. Shrink the datafile "dbcc shrinkfile ( filename, TRUNCATEONLY)"
> Would the above 2 steps be the means of keeping my datafile comfortably
> below 80GB?
> --
> Message posted via http://www.droptable.com
|||I do not believe I am being understood. When I delete the 500,000 records,
they do not go to an archive table; they are gone.
The 80GB max is due to available disk space.
When I delete records, I need to reclaim the unused space. Are the two
steps below a way of doing that?
1. Perform the weekly purge (deleting approximately 500,000 records).
2. Shrink the datafile: DBCC SHRINKFILE (filename, TRUNCATEONLY)
Is there a better way to reclaim unused space?
|||What you are seeing is mainly due to the fact that you have text or image
columns. Even when you delete rows, SQL Server may not free up all the space
previously used for the text columns. If you have a clustered
index on the table you can reindex the table and possibly free up some space
that may have been due to fragmentation of the non-text columns. But in
2000 there is nothing you can do to clean up text space usage short of
exporting all the data, truncating the table and importing it back in. SQL
2005 will allow you to reorganize blobs. Your real solution is to get more
disk space so you can deal with it better.
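Roughly, those commands would be as follows (the table name is a placeholder;
the SQL 2000 reindex only helps the in-row pages, while the SQL 2005 option
is what actually compacts the text/LOB pages):

-- SQL 2000: rebuild all indexes on the table (defragments in-row data only)
DBCC DBREINDEX ('dbo.BigTable', '', 90)

-- SQL 2005: reorganize the indexes and compact the LOB pages
ALTER INDEX ALL ON dbo.BigTable REORGANIZE WITH (LOB_COMPACTION = ON)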
Andrew J. Kelly SQL MVP
"Robert Richards via droptable.com" <forum@.nospam.droptable.com> wrote in
message news:0be9557414954ceca658ef750e5bb45e@.droptable.co m...
> My datafile is at 80GB. The data used in the datafile is about 70GB and
> growing, with approximately 4,000,000 records in a table that contains a
> Text field. I have a weekly purge that deletes about 500,000 records.
> (Disk
> space is at a premium.) When I delete the 500,000 records in the table,
> the
> data used in the data file does not reduce, but remains the same and keeps
> on growing. I need to keep the datafile comfortably below 80GB.
> My understanding is that shrinking the data file is not the best thing to
> do, but if I have to, then I will. So essentially, to keep the data file
> comfortably below 80GB I would:
> 1. Perform the weekly purge (deleting approximately 500,000 records).
> 2. Shrink the datafile "dbcc shrinkfile ( filename, TRUNCATEONLY)"
> Would the above 2 steps be the means of keeping my datafile comfortably
> below 80GB?
> --
> Message posted via http://www.droptable.com
|||Why then does the "data used" decrease on the data file when I delete
records from the same table in my Test environment, but when I delete
records from the table in production the "data used" does not decrease?
|||How are you determining the "data used"? Are you using sp_spaceused? Have
you tried running DBCC UPDATEUSAGE, or specifying the updateusage option?
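That is, the two checks being suggested here (the database name is a
placeholder):

DBCC UPDATEUSAGE ('MyDatabase')            -- corrects page/row counts in sysindexes
EXEC sp_spaceused @updateusage = 'TRUE'    -- reports totals using corrected counts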
Andrew J. Kelly SQL MVP
"Robert Richards via droptable.com" <forum@.droptable.com> wrote in message
news:ce78b9bd5c5447658b8a0e2badaf82cb@.droptable.co m...
> Why then does the "data used" decrease on the data file when I delete
> records from the same table in my Test environment, but when I delete
> records from the table in production the "data used" does not decrease?
> --
> Message posted via http://www.droptable.com
|||Also, the TRUNCATEONLY option will not free up the space, because truncate
only releases space from the end of the file. You need to MOVE all the data
toward the head of the file before shrinking, and you can only do that with
DBCC SHRINKFILE(MyDB_Data, TargetSize).
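For example, with a hypothetical logical file name and a 75 GB target (the
target size argument is in MB):

DBCC SHRINKFILE (MyDB_Data, 76800)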
Reindexing the clustered index will most likely cause the file to grow even
more, because the entire table is basically copied to a new location as it
reindexes. However, reindexing will defragment the data in index order. Then
if you shrink the file, it will move the data pages in defragmented order,
though that could take quite some time, especially on an 80 GB database.
You should consider creating 1-to-1 relationships between the LOB data and
the associated base tables. You could then create VIEWs to replace the
original table definitions and minimize code impact, and put all of the LOB
data in separate table(s). Those tables could then be created in separate
files on separate FileGroups. Shrinking and reordering just those files
should minimize the durations of those operations.
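A rough sketch of that arrangement, with hypothetical table, column, and
filegroup names (FG_LOB would have to be created first):

-- The text column moves to its own table on a separate filegroup
CREATE TABLE dbo.BigTableText (
    RecordID int NOT NULL PRIMARY KEY
        REFERENCES dbo.BigTable (RecordID),  -- 1:1 with the base table
    Body text NULL
) ON FG_LOB TEXTIMAGE_ON FG_LOB
GO

-- A view that presents the original shape so existing code keeps working
CREATE VIEW dbo.BigTableFull
AS
SELECT b.RecordID, b.CreatedDate, t.Body
FROM dbo.BigTable AS b
LEFT JOIN dbo.BigTableText AS t ON t.RecordID = b.RecordID
GO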
If you want to be sure of the numbers you are looking at, run sp_spaceused
@updateusage = 'true'.
Sincerely,
Anthony Thomas
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:ecOFGg9ZFHA.3168@.TK2MSFTNGP10.phx.gbl...
How are you determining the "data used"? Are you using sp_spaceused? Have
you tried running DBCC UPDATE USAGE or specifying the UpdateUasge option?
Andrew J. Kelly SQL MVP
"Robert Richards via droptable.com" <forum@.droptable.com> wrote in message
news:ce78b9bd5c5447658b8a0e2badaf82cb@.droptable.co m...
> Why then does the "data used" decrease on the data file when I delete
> records from the same table in my Test environment, but when I delete
> records from the table in production the "data used" does not decrease?
> --
> Message posted via http://www.droptable.com
|||I am still not clear on how or why my data continues to increase. Given my
current data set, I cannot quite reconcile what you are saying. For instance:
5/21/2005
Records in table: 4,091,571
Data used within data file: 64,546.3 MB
6/5/2005
Records in table: 3,679,559
Data used within data file: 71,828.5 MB
So even though the table has 412,012 fewer records (the only other tables are
small, static lookup tables), the data within the data file has grown by
7,282.2 MB.
I guess I just do not understand the text column well enough to see why there
is such substantial growth despite significantly fewer records in the table.
Please help, as I am going to have to explain this to my supervisor in a way
that either justifies more disk space or resolves the growth despite fewer
records. Thanks for your patience.
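For what it is worth, I can capture table-level numbers immediately before
and after each purge (the table name here stands in for my real one); if the
"reserved" figure barely drops after the DELETE, the space held by the text
pages is not being released:

EXEC sp_spaceused @objname = 'dbo.BigTable', @updateusage = 'TRUE'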