
Thursday, March 29, 2012

Data File Size

Thanks guys,
I understand that if the database reaches a point where it is forced to autogrow,
performance will degrade while the file is growing. What I want to know is the
impact of a database that is, say, 90-95% full, without any autogrow involved.
There will be updates, inserts, and reads, but we never reach full capacity.
Does a database that is 60-75% full perform better than one that is 90-95% full,
or is there no known impact?
"TheSQLGuru" wrote:

> Best practice is to size your database proactively and only let autogrowth
> act in 'emergency' cases. Size the database to allow for 1-2 years of
> expected growth, and revisit at least every 6 months. This will allow for
> maintenance operations like index rebuilds/reorgs to be able to lay data
> down sequentially on disk for optimal performance and also avoid autogrowth
> slowdowns during peak periods.
> --
> Kevin G. Boles
> Indicium Resources, Inc.
> SQL Server MVP
> kgboles a earthlink dt net
>
Just to add to what Kevin was getting at: chances are (though not 100%
guaranteed) that the closer you get to a full database, the more non-contiguous
the data will be within the data files. When you rebuild an index with
DBCC DBREINDEX or ALTER INDEX REBUILD, the engine creates an entirely new
copy of the index in the file and then drops the old one when done. So first
off, you need roughly 1.2 times the size of the index in free space just to
rebuild it. But if you have only that exact amount, or anything close to it,
the new index will most likely be built in small fragments or pockets
within the data file(s), wherever there happened to be room. With
plenty of free space, the chances are much greater that the indexes will be
built in contiguous fashion. And if you do any range or table
scans, read-aheads, etc., that can make a big difference in performance.
Andrew J. Kelly SQL MVP
Solid Quality Mentors
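A quick way to see how close each data file is to full, and how much room an index rebuild has to work with, is to query the file metadata. This is a sketch for a SQL Server 2000-era database; `sysfiles` and `FILEPROPERTY` report sizes in 8 KB pages:

```sql
-- Sketch: report size, used space, and free space per data file
-- for the current database (values in sysfiles are 8 KB pages).
SELECT name,
       size / 128.0                                       AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128.0            AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0   AS free_mb
FROM sysfiles;
```

If `free_mb` on a file is less than about 1.2 times your largest index, a rebuild is likely to land in scattered pockets rather than contiguously.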

Tuesday, March 27, 2012

Data File Management on SQL Server 2000

Hi Guys,
I have some serious data-management issues with my company's
implementation of about 70-odd databases on SQL Server 2000. The server
where the databases are physically located has been partitioned into 5
drives, 4 of which are used to store our data files. I am running into
problems managing disk space because the used space on each of these
drives is growing disproportionately.
The problem arises from the fact that when these databases were created, the
main data files were allowed unrestricted growth, so one of the 4 drives is
now chock full of data while the others have a lot of free space. My question
to you all is: would there be a way to take, say, about half the data from the
main data file of each of my databases and redistribute it among the secondary
data files? That way, growth would be much more proportionate across my server
drives.
Any ideas would be greatly appreciated. Thanks!
I had a similar problem. A 260 GB database had one data file, which was
impossible to work with. What I did was stop any growth on the main file
and create secondary files of fixed size (20 GB). As data was written to the
DB, SQL Server managed the process of balancing where the data was located. If
you want a particular file to grow, turn its growth on and SQL Server will be
forced to write data to it. Just ensure you have enough free space, and shrink
the main file when you get a chance; you will see the data redistributed over
all the files and start to even out.
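Capping the primary file and adding fixed-size secondaries, as described above, can be sketched in T-SQL along these lines. The database name, logical file names, path, and sizes are all placeholders:

```sql
-- Sketch: cap the primary data file and add a fixed-size secondary file.
-- MyDb, MyDb_Data, MyDb_Data2, the path, and the sizes are placeholders.
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_Data, MAXSIZE = 40960MB, FILEGROWTH = 0);

ALTER DATABASE MyDb
ADD FILE (
    NAME = MyDb_Data2,
    FILENAME = 'E:\Data\MyDb_Data2.ndf',
    SIZE = 20480MB,
    MAXSIZE = 20480MB,
    FILEGROWTH = 0
);
```

With growth disabled on the primary, new allocations have to land in whichever secondary files still have room.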
"Shishir Viriyala" wrote:
Thanks for the prompt reply! That is a great idea and I definitely intend to
use it, but what about some of my main data files right now that are already
so very big? How do I get some of the used space reduced so that I can cap
off each main data file at around 40 GB?
Thanks again!
"Jeffrey K. Ericson" wrote:
I found that this was a process that took several weeks to drop the main file
down. Beyond that, you'd have to relocate specific tables to specific files.
This would include dropping and transferring data. I wasn't willing to do
that but this was also a decision support system where all the data came at
night from upstream systems.
"Shishir Viriyala" wrote:
Thank you. I think I will have to go the route of manually dropping and
transferring tables to manage used space. Appreciate the help!
Actually, you could accomplish exactly what Jeff was describing: turn
AUTOGROW off on the main file and shrink it down to its minimum size; then
all you have to do is find some of your large tables and rebuild the clustered
index.
Because rebuilding the index must process the pages in sorted order, the
system must copy the information to new pages. If the main file is capped,
the data must go to the secondary files.
If you rebuild the clustered index, you will be relocating the data pages
themselves. Because a rebuild of the clustered index forces a rebuild of the
non-clustered indexes, the index pages for that table will move as well.
Just make sure you shrink the main file down again before you start
rebuilding indexes on the next table.
It's a little slow going, but faster than letting it happen just through
normal usage.
Sincerely,
Anthony Thomas
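The shrink-then-rebuild cycle described above can be sketched roughly as follows. The file name, table name, and target size are placeholders; DBCC DBREINDEX is the SQL Server 2000-era rebuild command:

```sql
-- Sketch: shrink the capped primary file, then rebuild a table's
-- clustered index so its pages are rewritten into the secondary files.
-- MyDb_Data, dbo.BigTable, and the target size (in MB) are placeholders.
DBCC SHRINKFILE (MyDb_Data, 40960);   -- shrink the primary toward 40 GB
DBCC DBREINDEX ('dbo.BigTable');      -- rebuild all indexes on the table
-- Repeat: shrink the primary again before moving on to the next table.
```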


Thursday, March 22, 2012

data driven query DTS activeX script

Hi guys, I'm using this VB script.

'**********************************************************************
' Visual Basic Transformation Script
'************************************************************************

' Copy each source column to the destination column
Function Main()
DTSDestination("SES_STATUS") = DTSSource("Session Status")
DTSDestination("SIT_ID") = DTSSource("Year/Sitting")
DTSDestination("TUT_ID") = DTSSource("Tutor Code")
DTSDestination("SLC_ID") = DTSSource("Sub Location")
DTSDestination("SES_STREAM") = DTSSource("Stream")
DTSDestination("SES_TYPE") = DTSSource("Subject Type")
DTSDestination("SES_DATE") = DTSSource("Start Date")
DTSDestination("SUB_ID") = DTSSource("Subject Code")
DTSDestination("SES_ID") = DTSSource("Start Date")
Main = DTSTransformStat_InsertQuery
End Function

What I would like to know is how I can complete this script so that it knows when to run an update statement and when to run an insert statement (I have already created the update and insert statements).

This DTS package will run every night, and the source database tables are updated and inserted into quite regularly. So something like this is needed:

if the record exists in the destination
    do the update statement

if the record does not exist
    do the insert statement

many thanks in advance.

As far as I remember this is / was not possible. You should use an Execute SQL task to first UPDATE the rows which exist in both tables (using linked servers or OPENDATASOURCE) and then INSERT the remaining ones which are not in the destination table yet.

Jens K. Suessmeyer.

http://www.sqlserver2005.de
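The UPDATE-then-INSERT approach Jens describes can be sketched in T-SQL roughly as follows. The table, column, and linked-server names are hypothetical, loosely based on the columns in the script above:

```sql
-- Sketch: update rows that already exist in the destination, then
-- insert the ones that don't. All object names are placeholders.
UPDATE d
SET    d.SES_STATUS = s.SessionStatus
FROM   dbo.Sessions AS d
JOIN   SourceServer.SourceDb.dbo.Sessions AS s
       ON d.SES_ID = s.SES_ID;

INSERT INTO dbo.Sessions (SES_ID, SES_STATUS)
SELECT s.SES_ID, s.SessionStatus
FROM   SourceServer.SourceDb.dbo.Sessions AS s
WHERE  NOT EXISTS (SELECT 1
                   FROM dbo.Sessions AS d
                   WHERE d.SES_ID = s.SES_ID);
```

Run from an Execute SQL task, this replaces the per-row decision in the ActiveX transformation with two set-based statements.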

Thursday, March 8, 2012

Data binding problem

Hi Guys,

I am having a little problem with data binding with multiple tables. I have two tables in my db:

Categories: CategoryID, CategoryName

SubCategories: SCategoryID, SCategoryName, CategoryID

I have a stored procedure to select SCategoryID, SCategoryName, CategoryName

And I have a form with 2 textboxes, a combobox, and a grid control. I load the result set of the stored procedure into a dataset, and bind the grid and textboxes (SCategoryID, SCategoryName) to the dataset. I fill the combobox from the Categories table and set DisplayMember = CategoryName, ValueMember = CategoryID.

Now the problem is, I can't bind the combobox to the subcategories so that when I select a record in the grid, the combobox changes accordingly.

How can I achieve this?

Regards

Kapalic

I'm assuming you're developing a Windows application and that you are handling the SelectionChanged event of the data grid to load the form with the data of the selected item. You should have already loaded the data grid with the sub categories and loaded the combo box with the categories in the form load method. To display the category of the selected sub category, you should set the "SelectedValue" of the combo box to the category ID of the selected item.

If this doesn't answer your question, please post your code so that I can check it for you.

Best regards,

Sami Samir

|||

Dear Sami

Yet again, that solves my problem! I really don't know how to thank you!!

Yes, the scenario you describe is right. I loaded a combobox with categories, and the grid with the Sub Category ID, Name, and Category Name. The problem was binding the combo to the grid so it reflects the change when the selection changes.

I was handling it in the CellClick and KeyDown events of the grid to update the combo manually, not having succeeded in binding it properly, and problems were happening. When I clicked a cell, the combo changed successfully. But when I used the arrow keys, it did not show the proper value: sometimes the current cell's value, sometimes the previous cell's value.

Here is the code I use in two events -

Code Snippet

With dgvResults
cboCategory.SelectedIndex = cboCategory.FindString(.CurrentRow.Cells(2).Value.ToString)
End With

But now I use it only in the SelectionChanged event, and it works nicely. Really, thank you from my heart. Please do me another favour and tell me whether this is the proper way to show the categories, or if there is an easier method using data binding.

Regards

Kapalic

|||

Your code is fine. I would prefer to rely on the DataBoundItem value of the CurrentRow to get the selected category name. Here is some sample code:

Code Snippet

Dim dr As DataRowView

dr = dgvResults.CurrentRow.DataBoundItem

cboCategory.SelectedIndex = cboCategory.FindString(dr("CategoryName").ToString)

The benefit of this is to avoid the issues related to accessing the value of the cells directly because you might add new columns so your code will have to be changed. Better still, you should use the SelectedValue instead of the SelectedIndex but you will have to use the category ID not the name. In this case, you will have to include the category ID in your query for loading the grid but you shouldn't display it. You will have to handle the grid so that it only displays the fields you want.
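Putting that suggestion together, a sketch of the SelectedValue approach might look like the following. It assumes the grid's query now includes a (hidden) CategoryID column and that the combo box was loaded with DisplayMember = "CategoryName" and ValueMember = "CategoryID"; the control names are taken from the code posted above.

```vb
' Sketch: keep the combo box in sync with the grid by matching on the
' CategoryID value instead of the displayed category name.
Private Sub dgvResults_SelectionChanged(ByVal sender As Object, _
                                        ByVal e As EventArgs) _
        Handles dgvResults.SelectionChanged
    If dgvResults.CurrentRow Is Nothing Then Return

    Dim dr As DataRowView = _
        CType(dgvResults.CurrentRow.DataBoundItem, DataRowView)

    ' SelectedValue matches against ValueMember, so no FindString is needed
    ' and renaming a category cannot break the lookup.
    cboCategory.SelectedValue = dr("CategoryID")
End Sub
```

Matching on the ID rather than the display text also keeps working if two categories ever share a similar name prefix, which would trip up FindString.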

Best regards,

Sami Samir

|||

Dear Sami

The second approach you suggest is better and easier. I've included the CategoryID in the query and loaded it into the grid. But the problem now is that I can't hide the column! How do I achieve this?

I use dgv.Columns(3).Visible = False, but it produces an error!

Need another bit of help!

Kapalic

|||

I tried it and it worked. I'm not sure what the problem is. Your query for loading the datagrid should have 4 fields (SCategoryID, SCategoryName, CategoryID, CategoryName). Without attempting to hide the columns, the grid will display all 4 columns.

After making sure that the grid displays the 4 columns, add the line that hides the column, after completing all the code that loads the grid. It's still better to reference the column by its name and not its index.

Code Snippet

dgv.Columns("CategoryID").Visible = False

I'm not sure if I helped. If not, please post your code.

Best regards,

Sami Samir

|||

Dear Sami

Thank you very much for your reply. Surprisingly, my previously posted code is now working correctly. Maybe the problem was with the loading of the data or something else.

I want to really thank you again, because I've learned some very valuable techniques which I did not know. They filled some big blank spaces in my knowledge that I needed for working with relational databases and ADO.NET. I am currently working on an inventory application, which is my first project after learning VB 2005. I would like to acknowledge you in my software. I would also like to be friends with you if possible.

Best Regards!

Kapalic

|||

Thanks for your kind words. This is the whole purpose of the forums: helping others overcome any problems they might encounter. I am glad that I was able to assist you.

Cheers,

Sami

Wednesday, March 7, 2012

data archiving and replication

Hi Guys,
I am using SQL 2000 transactional replication to feed a
reporting server.
In the OLTP server tables I want to keep only one year's
data (around 10 million rows), while on the reporting server I
will export the archived data and keep all the data so
that I can report for any period.
Will this cause any problem for replication, like
synchronisation issues or anything else?
Please advise me.
Right-click on your publication, select Publication Properties, go to the
Articles tab, and click on Command. Change your delete command to NONE on your
publisher.
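The same change can be scripted instead of made through the dialog. A sketch using sp_changearticle, with hypothetical publication and article names:

```sql
-- Stop replicating DELETE commands for one article so that archiving
-- deletes on the publisher do not remove rows on the subscriber.
-- 'SalesPub' and 'Orders' are placeholder names.
EXEC sp_changearticle
    @publication = N'SalesPub',
    @article     = N'Orders',
    @property    = N'del_cmd',
    @value       = N'NONE';
```

Note that this suppresses ALL replicated deletes for the article, not just the archiving ones, so any rows legitimately deleted on the publisher will linger on the subscriber.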
"biju george" <bijupg@.hotmail.com> wrote in message
news:11d3a01c410d3$b68b68b0$a401280a@.phx.gbl...
> Hi Guys,
> i am using sql 2000 transactional replication to have
> reporting server.
> in the oltp server tables i want to keep only one years
> data (around 1o million) while in the reporting server i
> will export the archived data and keep all the data so
> that i can take reportfor any period.
> will this make any problem for replication.like
> syncronisation or anything?
> pls advice me.
|||Biju,
the way you implement this depends on what type of replication you're
considering. For snapshot replication, you could use static filters. Each
time you want to archive off some data, you modify the filter clause. Be
sure to modify the article properties, naming conflicts to "Delete data in
existing table that matches the row filter statement." - before the
synchronization, there won't be any matching data as it only exists on the
publisher. After that you remove the data on the publisher. For
transactional you could set it up as per usual and just select not to
replicate delete commands - all new records will go to the subscriber and on
the publisher old records can be deleted with this not affecting the
subscriber. Finally, just to give you some other options, you could archive
the data to another table on the publisher. On the publisher create a view
which unions the data, and then create an index on the view and replicate
the indexed view. Actually this would probably be my choice, as it maintains
the source data on the publisher.
HTH,
Paul Ibison
|||I am using transactional replication. I deleted the old data
from the publisher and copied the archived data into another
table on the publisher. Now the data on the publisher and
subscriber are the SAME,
but for reporting, to get all the data I will export the
old data from the archived table on the publisher to the
original table on the subscriber using DTS.
Using the indexed view will be slower for report generation,
won't it?
|||Hilary's reply is the way to go to continuously keep the reporting server
up to date. For his reply to work you'll need all the data in one table on
the publisher initially, then remove the 'archived' rows. If you are to
continue partitioning the data on the publisher, then you can use my
snapshot advice to append the data - no need for DTS, although it is
functionally the same methodology here. My suggestion for indexed views also
works. It wouldn't be slower for reporting, because the indexed view
amalgamates the data on the publisher and replicates it to one table on the
subscriber. However, changes to data on the publisher will be fractionally
slower as the indexed view will get updated behind the scenes.
Regards,
Paul Ibison
|||For this, do I have to create the indexed view, or only replicate
the view?
Will it put more load on the publisher?
|||Biju,
actually I just tried this out for you and unfortunately the restrictions on indexed views are too severe for this to be a viable solution - sorry. So, that leaves you with Hilary's advice, in which case you need to initially amalgamate the data on the publisher, create the publication, synchronize, then delete the archived data on the publisher. Or, use a linked server or snapshot replication. Also, if you want to keep the tables separate, you could use replication to copy the data to separate tables on the subscriber (with different names) and have a view there to amalgamate the data.
Regards,
Paul Ibison
|||I already have the archived data in a separate table on the
publisher, so I will export this data to the original table
on the subscriber so that the report will be generated from a
table with the complete data.
Will this idea cause any problem in the future? From my testing
it works fine.
Please advise me.
|||Biju,
if you want to be able to report on ALL your data at the subscriber, you initially need to amalgamate the records from 2 tables - your live table and your archive table - on the publisher into one on the subscriber. The easiest way to do this is to first do it manually (e.g. using linked servers). Then apply Hilary's solution to the production table, stating that the schema already exists on the subscriber and making sure not to replicate deletes.
Regards,
Paul Ibison
|||Paul,
If I understand properly, there are actually two types of deletes that would
occur:
1. Deletes relating to normal transaction management.
2. Deletes relating to archiving data. Here data is copied to the archive
table and deleted from the live table.
If deletes are not allowed to happen at the subscriber then it affects both
cases of deletes. What should actually happen is that deletes for case 1
should be replicated but not those for case 2. This is because if we did not
allow case 1 deletes to replicate to the subscriber, the latter would contain
data that does not exist in the publisher anymore, with all the consequences
thereof (wrong reporting at the subscriber?).
I think the other solutions suggested should work, namely:
1. Publisher has tables Live and Archive.
2. Subscriber has same tables and each table gets data from the
corresponding tables from Publisher. Synchronisation should happen normally
for both tables.
3. Create a normal view on subscriber with UNION of data from both tables.
Use this view for reporting purposes.
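Step 3 above might look like the following sketch on the subscriber; the table and column names are hypothetical placeholders for the live and archive tables.

```sql
-- One reporting view over both replicated tables on the subscriber.
-- UNION ALL avoids the duplicate-elimination cost of plain UNION;
-- the archiving process guarantees the two tables never overlap.
CREATE VIEW dbo.AllOrders
AS
SELECT OrderID, OrderDate, Amount FROM dbo.LiveOrders
UNION ALL
SELECT OrderID, OrderDate, Amount FROM dbo.ArchiveOrders;
```

Reports then query dbo.AllOrders and never need to know which table a row lives in.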
Hoping I am not shooting off-target!
Raj Moloye
Mauritius

Saturday, February 25, 2012

Data Access Error - ConnectionString property

Hi guys,

I'm getting this error:

System.InvalidOperationException: The ConnectionString property has not been initialized

when I'm trying to connect to an SQL database using the following code:

Dim mySelectQuery As String = "SELECT [tabl_id] FROM [myTable]"
Dim myConnection As New SqlConnection(System.Configuration.ConfigurationManager.AppSettings("myConnString"))
Dim myCommand As New SqlCommand(mySelectQuery, myConnection)
myConnection.Open()
Dim myReader As SqlDataReader
myReader = myCommand.ExecuteReader()
While myReader.Read()
Response.Write(myReader("tabl_id") & "<br/>")
End While
myConnection.Close()

and in web.config:

<appSettings/>
<connectionStrings>
<add name="myConnString" connectionString="Data Source=100.100.100.100;Initial Catalog=myDatabase;User ID=me;Password=password" providerName="System.Data.SqlClient"/>
</connectionStrings>

If I place the actual connection string inside instead of trying to reference the web.config setting, it works, but using 'System.Configuration.ConfigurationManager.AppSettings' returns the error. Is there something I'm doing wrong?


hi there,

the connectionStrings element in your web.config is actually not a child node of appSettings but is on the same level instead. So maybe you wanna get the ConnectionStrings collection instead of the AppSettings collection out of the web.config.

System.Configuration.ConfigurationManager.ConnectionStrings("myConnString").ConnectionString

see if the above line works.
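Putting it together, a reworked version of the original snippet might look like this (same page context and web.config entry as above; wrapping the connection and reader in Using blocks is an optional improvement that guarantees they are closed even if an exception is thrown):

```vb
' Read the connection string from <connectionStrings>, not <appSettings>.
Dim connString As String = _
    System.Configuration.ConfigurationManager. _
        ConnectionStrings("myConnString").ConnectionString

Using myConnection As New SqlConnection(connString)
    Dim myCommand As New SqlCommand("SELECT [tabl_id] FROM [myTable]", myConnection)
    myConnection.Open()
    Using myReader As SqlDataReader = myCommand.ExecuteReader()
        While myReader.Read()
            Response.Write(myReader("tabl_id") & "<br/>")
        End While
    End Using
End Using ' connection is closed here, even on error
```

The original code only reached myConnection.Close() when no exception occurred, so the Using blocks also fix a subtle connection leak.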

|||

Thanks a lot, that works perfectly! :)