
Thursday, March 29, 2012

Data File shrink

I have a data file in which there is a table containing 5,000,000 rows. I
deleted 2,500,000 rows, leaving 2,500,000 rows in the table. The table has
a clustered primary key index, and the table also has a text column. The
only other tables in the database are small lookup tables.
The data file size before the delete was 75GB. After shrinking the data file
(with target size 25000MB), the size only decreased to 68GB.
The results of the shrinkfile were the following:
    currentsize     8591936
    minimumsize     128
    usedpages       8206784
    estimatedpages  8206784
Is there anything short of recreating a new table using select into, or
bcp, that might reclaim space?
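(For reference, the figures above are the standard DBCC SHRINKFILE result columns, reported in 8 KB pages. A shrink like the one described would be issued roughly as below; the database and logical file names are made up for illustration.)

```sql
-- Hypothetical database and logical file names throughout.
USE ArchiveDB
GO
-- List the logical file names for this database first.
EXEC sp_helpfile
GO
-- Shrink the data file toward a 25000 MB target. The result set returns
-- CurrentSize, MinimumSize, UsedPages and EstimatedPages, all counted
-- in 8 KB pages.
DBCC SHRINKFILE (ArchiveData, 25000)
GO
```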
--
Message posted via http://www.sqlmonster.com

|||The space used by the deleted text cannot be reclaimed in SQL Server 2000.
The (clumsy) solution is to bcp-out then in. In SQL Server 2005 we've fixed
this by adding LOB compaction to the shrink and defrag operations.
Regards
--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:127b65beb25d4ccab668ff9ff8df8322@.SQLMonster.com...
> I have a data file in which there is a table containing 5,000,000 rows. I
> deleted 2,500,000 rows, leaving 2,500,000 rows in the table. The table has
> a clustered primary key index, and the table also has a text column. The
> only other tables in the database are small lookup tables.
> The datafile size before the delete was 75GB. After shrinking the datafile
> (with target size 25000MB), the size decreased to 68GB.
> The results of the shrinkfile were the following:
> currentsize 8591936
> minimumsize 128
> usedpages 8206784
> estimatedpages 8206784
> Is there anything short of recreating a new table using select into, or
> bcp, that might reclaim space?
> --
> Message posted via http://www.sqlmonster.com|||It is most likely due to the text columns. Reindexing and shrinking does
little or nothing to help reduce wasted space in them. BCP out, truncate
and BCP back in is the cleanest method. SQL 2005 has features to address
this, but 2000 does not.
--
Andrew J. Kelly SQL MVP
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:127b65beb25d4ccab668ff9ff8df8322@.SQLMonster.com...
>I have a data file in which there is a table containing 5,000,000 rows. I
> deleted 2,500,000 rows, leaving 2,500,000 rows in the table. The table has
> a clustered primary key index, and the table also has a text column. The
> only other tables in the database are small lookup tables.
> The datafile size before the delete was 75GB. After shrinking the datafile
> (with target size 25000MB), the size decreased to 68GB.
> The results of the shrinkfile were the following:
> currentsize 8591936
> minimumsize 128
> usedpages 8206784
> estimatedpages 8206784
> Is there anything short of recreating a new table using select into, or
> bcp, that might reclaim space?
> --
> Message posted via http://www.sqlmonster.com|||When you say truncate, is that the same as:
delete [tablename]?
--
Message posted via http://www.sqlmonster.com

|||Well, not exactly. A TRUNCATE is much faster than a DELETE because it simply
deallocates all the pages associated with the table and its indexes, with
minimal logging to the transaction log. A DELETE with no WHERE clause logs
each and every row and will be much slower on a larger system. However, there
are some caveats to TRUNCATE. One is that you cannot issue it on a table
referenced by a foreign key unless you drop that constraint first. Two, it
does not update the statistics, but the idea is that you will BCP the data
right back in anyway.
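(Side by side, with a hypothetical table name, the distinction looks like this; note that TRUNCATE also reseeds any IDENTITY column, which matters for a table with an identity primary key.)

```sql
-- Hypothetical table name.
-- Fully logged: every deleted row is written to the transaction log.
DELETE FROM ArchiveTable

-- Minimally logged: only the page deallocations are recorded, so it is
-- far faster, but it fails on a table referenced by a FOREIGN KEY and
-- it resets the table's IDENTITY seed.
TRUNCATE TABLE ArchiveTable
```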
--
Andrew J. Kelly SQL MVP
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:ff665e017d394681a275d526eb8acf92@.SQLMonster.com...
> When you say truncate, is that the same as:
> delete [tablename]?
> --
> Message posted via http://www.sqlmonster.com|||There is an option to reorganize data pages in the maintenance plan wizard.
Does this reclaim any space when I have already performed a shrinkfile
operation?
--
Message posted via http://www.sqlmonster.com

|||No - it will take up more space. Under the covers it rebuilds the indexes
using DBCC DBREINDEX. Each index rebuild needs 1.2x the size of the index in
extra space. If you shrink then reorganize, your database will grow again.
If you reorganize then shrink, shrink will fragment the indexes again. Also,
reorganize doesn't touch text pages at all in SQL Server 2000.
There's no way in SQL Server 2000 to get the space back from the deleted
text rows without bcp out/in.
--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:f3800333aebc4d99bcd34685895ed4cd@.SQLMonster.com...
> There is an option to reorganize data pages in the maintenance plan
wizard.
> Does this reclaim any space when I have already performed a shrinkfile
> operation?
> --
> Message posted via http://www.sqlmonster.com|||The Reorganize part of the MP is a DBREINDEX. A DBREINDEX does not reclaim
any space per se. It may free up a few pages, based on the fill factor and
depending on how full they were before the reindexing. There is a different
option in the wizard to free up space, which essentially does a DBCC
SHRINKFILE. It should be turned off, as per the other conversations we had
regarding this subject.
--
Andrew J. Kelly SQL MVP
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:f3800333aebc4d99bcd34685895ed4cd@.SQLMonster.com...
> There is an option to reorganize data pages in the maintenance plan
> wizard.
> Does this reclaim any space when I have already performed a shrinkfile
> operation?
> --
> Message posted via http://www.sqlmonster.com|||So if I started at 5 million records and 75GB, deleted 2.5 million and now
have 68GB, BCP export, truncate the table, BCP import back into the
truncated table, do I then run a shrinkfile again to further reduce the
68GB file? I do not believe that bcp will shrink the file, so do I perform
dbcc shrinkfile, or is there a better way once the import is completed?
--
Message posted via http://www.sqlmonster.com

|||You would do it in this order:
BCP out the table
Truncate the table
DBCC SHRINKFILE
BCP back into the table
But you have to be careful that you don't shrink the file too much or it
will simply grow again when you BCP the data back in. Why do you want to
shrink it at all? If there is any chance at all the data will get that
large again it is best to leave it at that size. You also need room for
reindexing.
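(The sequence above can be sketched in T-SQL, driving bcp through xp_cmdshell; every name, path and target size here is an assumption, and -n (native mode) is used so the text column round-trips intact.)

```sql
-- All database, table, file and path names are hypothetical.
-- 1. BCP the table out in native format (-n), trusted connection (-T).
EXEC master..xp_cmdshell
    'bcp ArchiveDB.dbo.ArchiveTable out D:\export\archive.dat -n -T'

-- 2. Truncate the table (drop any referencing foreign keys first).
TRUNCATE TABLE ArchiveDB.dbo.ArchiveTable

-- 3. Shrink the file, leaving headroom for the reload and reindexing.
DBCC SHRINKFILE (ArchiveData, 40000)

-- 4. BCP the data back in.
EXEC master..xp_cmdshell
    'bcp ArchiveDB.dbo.ArchiveTable in D:\export\archive.dat -n -T'
```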
--
Andrew J. Kelly SQL MVP
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:d29d78ea6b5a4cab9e2566bb93b3bf97@.SQLMonster.com...
> So if I started at 5 million records and 75GB, deleted 2.5 million and now
> have 68GB, BCP export, truncate the table, BCP import back into the
> truncated table, do I then run a shrinkfile again to further reduce the
> 68GB file? I do not believe that bcp will shrink the file, so do I perform
> dbcc shrinkfile, or is there a better way once the import is completed?
> --
> Message posted via http://www.sqlmonster.com|||This is an archive table, that has essentially served its purpose. We are
no longer archiving to it, but instead only deletes will be performed
against it in the future. No further growth anticipated, just the opposite,
a gradual reduction in size.
If I remove the indexes before performing the import, and given that I
allow enough space in the shrunken data file to recreate the indexes, does
this process of recreating the indexes after the import (having a clustered
primary key identity index) cause any fragmentation issues, versus leaving
the indexes in place during the import?
--
Message posted via http://www.sqlmonster.com

|||You will need at least 1.2 times the size of the table (in the case of the
clustered index), plus free space for each non-clustered index. If you have
only a small amount above that free, the chances are the indexes will not be
contiguous in the file, but they will still be built. The more free space you
have, the better the chance that the indexes can be placed in a contiguous
area of the file when rebuilt. A little bit of fragmentation won't hurt.
--
Andrew J. Kelly SQL MVP
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:fb453ef7c78943ce9b5af4e75cfb224e@.SQLMonster.com...
> This is an archive table, that has essentially served its purpose. We are
> no longer archiving to it, but instead only deletes will be performed
> against it in the future. No further growth anticipated, just the
> opposite,
> a gradual reduction in size.
> If I remove the indexes before performing the import, and given that I
> allow enough space in the shrunken data file to recreate the indexes, does
> this process of recreating the indexes after the import (having a
> clustered
> primary key identity index) cause any fragmentation issues, versus leaving
> the indexes in place during the import?
> --
> Message posted via http://www.sqlmonster.comsql

Data File shrink

I have a data file in which there is a table containing 5,000,000 rows. I
deleted 2,500,000 rows, leaving 2,500,000 rows in the table. The table has
a clustered primary key index, and the table also has a text column. The
only other tables in the database are small lookup tables.
The datafile size before the delete was 75GB. After shrinking the datafile
(with target size 25000MB), the size decreased to 68GB.
The results of the shrinkfile were the following:
currentsize8591936
minimumsize128
usedpages8206784
estimatedpages8206784
Is there anything short of recreating a new table using select into, or
bcp, that might reclaim space?
Message posted via http://www.sqlmonster.com
The space used by the deleted text cannot be reclaimed in SQL Server 2000.
The (clumsy) solution is to bcp-out then in. In SQL Server 2005 we've fixed
this by adding LOB compaction to the shrink and defrag operations.
Regards
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:127b65beb25d4ccab668ff9ff8df8322@.SQLMonster.c om...
> I have a data file in which there is a table containing 5,000,000 rows. I
> deleted 2,500,000 rows, leaving 2,500,000 rows in the table. The table has
> a clustered primary key index, and the table also has a text column. The
> only other tables in the database are small lookup tables.
> The datafile size before the delete was 75GB. After shrinking the datafile
> (with target size 25000MB), the size decreased to 68GB.
> The results of the shrinkfile were the following:
> currentsize 8591936
> minimumsize 128
> usedpages 8206784
> estimatedpages 8206784
> Is there anything short of recreating a new table using select into, or
> bcp, that might reclaim space?
> --
> Message posted via http://www.sqlmonster.com
|||It is most likely due to the text columns. Reindexing and shrinking does
little or nothing to help reduce wasted space in them. BCP out, truncate
and BCP back in is the cleanest method. SQL2005 has features to address
this but not 2000.
Andrew J. Kelly SQL MVP
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:127b65beb25d4ccab668ff9ff8df8322@.SQLMonster.c om...
>I have a data file in which there is a table containing 5,000,000 rows. I
> deleted 2,500,000 rows, leaving 2,500,000 rows in the table. The table has
> a clustered primary key index, and the table also has a text column. The
> only other tables in the database are small lookup tables.
> The datafile size before the delete was 75GB. After shrinking the datafile
> (with target size 25000MB), the size decreased to 68GB.
> The results of the shrinkfile were the following:
> currentsize 8591936
> minimumsize 128
> usedpages 8206784
> estimatedpages 8206784
> Is there anything short of recreating a new table using select into, or
> bcp, that might reclaim space?
> --
> Message posted via http://www.sqlmonster.com
|||When you say truncate, is that the same as:
delete [tablename]?
Message posted via http://www.sqlmonster.com
|||Well not exactly. A truncate is much faster than a Delete since it simply
deallocates all the pages associated with the table and it's indexes. There
is minimal logging to the transaction log. A Delete with no where clause
will log each and every row to the log and will be much slower on a larger
system. However there are some caveats to truncate. One is that you can
not issue this on a table with RI unless you drop that first. Two it does
not update the statistics but the idea is you will BCP it right back in
anyway.
Andrew J. Kelly SQL MVP
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:ff665e017d394681a275d526eb8acf92@.SQLMonster.c om...
> When you say truncate, is that the same as:
> delete [tablename]?
> --
> Message posted via http://www.sqlmonster.com
|||There is an option to reorganize data pages in the maintenance plan wizard.
Does this reclaim any space when I have already performed a shrinkfile
operation?
Message posted via http://www.sqlmonster.com
|||No - it will take up more space. Under the covers it rebuilds the indexes
using DBCC DBREINDEX. Each index rebuild needs 1.2x the size of the index in
extra space. If you shrink then reorganize, your database will grow again.
If you reorganize then shrink, shrink will fragment the indexes again. Also,
reorganize doesn't touch text pages at all in SQL Server 2000.
There's no way in SQL Server 2000 to get the space back from the deleted
text rows without bcp out/in.
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:f3800333aebc4d99bcd34685895ed4cd@.SQLMonster.c om...
> There is an option to reorganize data pages in the maintenance plan
wizard.
> Does this reclaim any space when I have already performed a shrinkfile
> operation?
> --
> Message posted via http://www.sqlmonster.com
|||The Reorganize part of the MP is a DBREINDEX. A DBREINDEX does not reclaim
any space per say. It may free up a few pages based on the fill factor
depending on how full they were before the reindexing. There is a different
option in the wizard to free up space and will essentially do a DBCC
SHRINKFILE. It should be turned off as per the other conversations we had
regarding this subject.
Andrew J. Kelly SQL MVP
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:f3800333aebc4d99bcd34685895ed4cd@.SQLMonster.c om...
> There is an option to reorganize data pages in the maintenance plan
> wizard.
> Does this reclaim any space when I have already performed a shrinkfile
> operation?
> --
> Message posted via http://www.sqlmonster.com
|||So if I started at 5 million records and 75GB, deleted 2.5 million and now
have 68GB, BCP export, truncate the table, BCP import back into the
truncated table, do I then run a shrinkfile again to further reduce the
68GB file? I do not believe that bcp will shrink the file, so do I perform
dbcc shrinkfile, or is there a better way once the import is completed?
Message posted via http://www.sqlmonster.com
|||You would do it in this order:
BCP out the table
Truncate the table
DBCC SHRINKFILE
BCP back into the table
But you have to be careful that you don't shrink the file too much or it
will simply grow again when you BCP the data back in. Why do you want to
shrink it at all? If there is any chance at all the data will get that
large again it is best to leave it at that size. You also need room for
reindexing.
Andrew J. Kelly SQL MVP
"Robert Richards via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:d29d78ea6b5a4cab9e2566bb93b3bf97@.SQLMonster.c om...
> So if I started at 5 million records and 75GB, deleted 2.5 million and now
> have 68GB, BCP export, truncate the table, BCP import back into the
> truncated table, do I then run a shrinkfile again to further reduce the
> 68GB file? I do not believe that bcp will shrink the file, so do I perform
> dbcc shrinkfile, or is there a better way once the import is completed?
> --
> Message posted via http://www.sqlmonster.com

Data File shrink

I have a data file in which there is a table containing 5,000,000 rows. I
deleted 2,500,000 rows, leaving 2,500,000 rows in the table. The table has
a clustered primary key index, and the table also has a text column. The
only other tables in the database are small lookup tables.
The datafile size before the delete was 75GB. After shrinking the datafile
(with target size 25000MB), the size decreased to 68GB.
The results of the shrinkfile were the following:
currentsize 8591936
minimumsize 128
usedpages 8206784
estimatedpages 8206784
Is there anything short of recreating a new table using select into, or
bcp, that might reclaim space?
Message posted via http://www.droptable.comThe space used by the deleted text cannot be reclaimed in SQL Server 2000.
The (clumsy) solution is to bcp-out then in. In SQL Server 2005 we've fixed
this by adding LOB compaction to the shrink and defrag operations.
Regards
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"Robert Richards via droptable.com" <forum@.droptable.com> wrote in message
news:127b65beb25d4ccab668ff9ff8df8322@.SQ
droptable.com...
> I have a data file in which there is a table containing 5,000,000 rows. I
> deleted 2,500,000 rows, leaving 2,500,000 rows in the table. The table has
> a clustered primary key index, and the table also has a text column. The
> only other tables in the database are small lookup tables.
> The datafile size before the delete was 75GB. After shrinking the datafile
> (with target size 25000MB), the size decreased to 68GB.
> The results of the shrinkfile were the following:
> currentsize 8591936
> minimumsize 128
> usedpages 8206784
> estimatedpages 8206784
> Is there anything short of recreating a new table using select into, or
> bcp, that might reclaim space?
> --
> Message posted via http://www.droptable.com|||It is most likely due to the text columns. Reindexing and shrinking does
little or nothing to help reduce wasted space in them. BCP out, truncate
and BCP back in is the cleanest method. SQL2005 has features to address
this but not 2000.
Andrew J. Kelly SQL MVP
"Robert Richards via droptable.com" <forum@.droptable.com> wrote in message
news:127b65beb25d4ccab668ff9ff8df8322@.SQ
droptable.com...
>I have a data file in which there is a table containing 5,000,000 rows. I
> deleted 2,500,000 rows, leaving 2,500,000 rows in the table. The table has
> a clustered primary key index, and the table also has a text column. The
> only other tables in the database are small lookup tables.
> The datafile size before the delete was 75GB. After shrinking the datafile
> (with target size 25000MB), the size decreased to 68GB.
> The results of the shrinkfile were the following:
> currentsize 8591936
> minimumsize 128
> usedpages 8206784
> estimatedpages 8206784
> Is there anything short of recreating a new table using select into, or
> bcp, that might reclaim space?
> --
> Message posted via http://www.droptable.com|||When you say truncate, is that the same as:
delete [tablename]?
Message posted via http://www.droptable.com|||Well not exactly. A truncate is much faster than a Delete since it simply
deallocates all the pages associated with the table and it's indexes. There
is minimal logging to the transaction log. A Delete with no where clause
will log each and every row to the log and will be much slower on a larger
system. However there are some caveats to truncate. One is that you can
not issue this on a table with RI unless you drop that first. Two it does
not update the statistics but the idea is you will BCP it right back in
anyway.
Andrew J. Kelly SQL MVP
"Robert Richards via droptable.com" <forum@.droptable.com> wrote in message
news:ff665e017d394681a275d526eb8acf92@.SQ
droptable.com...
> When you say truncate, is that the same as:
> delete [tablename]?
> --
> Message posted via http://www.droptable.com|||There is an option to reorganize data pages in the maintenance plan wizard.
Does this reclaim any space when I have already performed a shrinkfile
operation?
Message posted via http://www.droptable.com|||No - it will take up more space. Under the covers it rebuilds the indexes
using DBCC DBREINDEX. Each index rebuild needs 1.2x the size of the index in
extra space. If you shrink then reorganize, your database will grow again.
If you reorganize then shrink, shrink will fragment the indexes again. Also,
reorganize doesn't touch text pages at all in SQL Server 2000.
There's no way in SQL Server 2000 to get the space back from the deleted
text rows without bcp out/in.
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"Robert Richards via droptable.com" <forum@.droptable.com> wrote in message
news:f3800333aebc4d99bcd34685895ed4cd@.SQ
droptable.com...
> There is an option to reorganize data pages in the maintenance plan
wizard.
> Does this reclaim any space when I have already performed a shrinkfile
> operation?
> --
> Message posted via http://www.droptable.com|||The Reorganize part of the MP is a DBREINDEX. A DBREINDEX does not reclaim
any space per say. It may free up a few pages based on the fill factor
depending on how full they were before the reindexing. There is a different
option in the wizard to free up space and will essentially do a DBCC
SHRINKFILE. It should be turned off as per the other conversations we had
regarding this subject.
Andrew J. Kelly SQL MVP
"Robert Richards via droptable.com" <forum@.droptable.com> wrote in message
news:f3800333aebc4d99bcd34685895ed4cd@.SQ
droptable.com...
> There is an option to reorganize data pages in the maintenance plan
> wizard.
> Does this reclaim any space when I have already performed a shrinkfile
> operation?
> --
> Message posted via http://www.droptable.com|||So if I started at 5 million records and 75GB, deleted 2.5 million and now
have 68GB, BCP export, truncate the table, BCP import back into the
truncated table, do I then run a shrinkfile again to further reduce the
68GB file? I do not believe that bcp will shrink the file, so do I perform
dbcc shrinkfile, or is there a better way once the import is completed?
Message posted via http://www.droptable.com|||You would do it in this order:
BCP out the table
Truncate the table
DBCC SHRINKFILE
BCP back into the table
But you have to be careful that you don't shrink the file too much or it
will simply grow again when you BCP the data back in. Why do you want to
shrink it at all? If there is any chance at all the data will get that
large again it is best to leave it at that size. You also need room for
reindexing.
Andrew J. Kelly SQL MVP
"Robert Richards via droptable.com" <forum@.droptable.com> wrote in message
news:d29d78ea6b5a4cab9e2566bb93b3bf97@.SQ
droptable.com...
> So if I started at 5 million records and 75GB, deleted 2.5 million and now
> have 68GB, BCP export, truncate the table, BCP import back into the
> truncated table, do I then run a shrinkfile again to further reduce the
> 68GB file? I do not believe that bcp will shrink the file, so do I perform
> dbcc shrinkfile, or is there a better way once the import is completed?
> --
> Message posted via http://www.droptable.com

Tuesday, March 27, 2012

data extraction/mapping between two databases

Hi,
just wondering, what's the easiest way to map about 10,000 rows from database
a (old) to database b?
The complexity is that the structure is completely different; database b is
more normalized and has relationships maintained, whereas database a has
everything in one table. So I have to map one column (choosing distinct rows)
and move it to a new table in database b.
I did try DTS for a couple of them, but it seems tedious and never-ending.
What's the easiest way to do this? Any OR tools? Anything...
Thanks,

|||I'd just understand the data, and then populate tables manually, using
INSERT statements, to populate the primary key and foreign key tables.
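(As a sketch of that manual approach, with an entirely made-up schema: load the parent table from the distinct values first, then insert the child rows by joining back to pick up the new keys.)

```sql
-- Hypothetical schema: OldDB.dbo.BigTable is the flat source table;
-- NewDB has normalized Category (parent) and Item (child) tables.
-- 1. Parent table: one row per distinct value.
INSERT INTO NewDB.dbo.Category (CategoryName)
SELECT DISTINCT CategoryName
FROM OldDB.dbo.BigTable

-- 2. Child table: join back to the parent to resolve the foreign key.
INSERT INTO NewDB.dbo.Item (ItemName, CategoryID)
SELECT o.ItemName, c.CategoryID
FROM OldDB.dbo.BigTable AS o
JOIN NewDB.dbo.Category AS c ON c.CategoryName = o.CategoryName
```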
--
HTH,
Vyas, MVP (SQL Server)
http://vyaskn.tripod.com/
"Mani" <mani@.spamthis@.evikasystems.com> wrote in message
news:eACFDzamEHA.1904@.TK2MSFTNGP09.phx.gbl...
Hi ,
just wondering, whats the easiest way to map about 10000 rows from database
a (old) to database b.
the complexity is that the structure is completely different, database b is
more normalized. has relationships maintained..where as database a has
everything in one table ..So I have to map one column (choose distinct rows)
and move to datbase b.new table...
I did try DTS for couple of them..but seems tedious and never ending..
Whats the easiest way to do this? any OR tools ' anything ...
Thanks,

data extraction/mapping between two databases

Hi ,
just wondering, whats the easiest way to map about 10000 rows from database
a (old) to database b.
the complexity is that the structure is completely different, database b is
more normalized. has relationships maintained..where as database a has
everything in one table ..So I have to map one column (choose distinct rows)
and move to datbase b.new table...
I did try DTS for couple of them..but seems tedious and never ending..
Whats the easiest way to do this? any OR tools ? anything ...
Thanks,
I'd just understand the data, and then populate tables manually, using
INSERT statements, to populate the primary key and foreign key tables.
HTH,
Vyas, MVP (SQL Server)
http://vyaskn.tripod.com/
"Mani" <mani@.spamthis@.evikasystems.com> wrote in message
news:eACFDzamEHA.1904@.TK2MSFTNGP09.phx.gbl...
Hi ,
just wondering, whats the easiest way to map about 10000 rows from database
a (old) to database b.
the complexity is that the structure is completely different, database b is
more normalized. has relationships maintained..where as database a has
everything in one table ..So I have to map one column (choose distinct rows)
and move to datbase b.new table...
I did try DTS for couple of them..but seems tedious and never ending..
Whats the easiest way to do this? any OR tools ? anything ...
Thanks,

Sunday, March 25, 2012

data export to CSV

I have a report that produces over 3 million rows. This report is
exported to CSV and is scheduled to run weekly.
Sometimes the report needs to be run manually. Initially this report
timed out; now the report runs but takes a very long time to complete.
If the stored procedure is run directly it takes about 5 minutes to
execute. Running it in Reporting Services takes upwards of an hour.
Does anyone have suggestions on how to speed up this report?
My initial thought would be to use SSRS to populate the parameters
and maybe hook into the reporting engine and execute the stored
procedure directly with the parameters defined.
thanks in advance
Ian

|||Hi, how do you run this report? HTTP? Web service? How?
--
----
Microsoft M.V.P. in SQL Server
SQLTotal Consulting - SQL Server Services
Email: maxi.da@.gmail.com.nospam
----
<emery.ian@.googlemail.com> wrote in message
news:e977052b-81b7-404f-88e1-d14968fbe0ab@.e25g2000prg.googlegroups.com...

|||Do not waste your time trying to run the report different ways; it will not
affect the time.
Your time is going in two places. One, does it really only take 5 minutes
to extract 5 million records? I'm not sure, when executing a stored
procedure from Query Analyzer that returns 5 million records, whether
it really retrieves all the records or not. Anyway, I think that is one
area: the number of records being physically returned takes time.
Second, RS 2000 and RS 2005 does all rendering in RAM. It does not page
anything out to disk or utilize the disk at all when rendering. I am pretty
sure this is changing substantially in RS 2008. Adding more RAM to the
server will help.
The other option is to use SSIS instead.
Bruce Loehle-Conger
MVP SQL Server Reporting Services
|||An additional point: BCP is the tool to use (not SSIS and not RS) if
performance is your goal. Give a DBA your query used by the report and they
could have BCP written and scripted for you in no time. It's been over a
decade since I used BCP and I had forgotten about it.
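(A sketch of that BCP route, with server, view and path names made up: bcp's queryout mode exports the result of an arbitrary query, and character mode with a comma terminator gives CSV.)

```sql
-- Hypothetical names throughout. -c = character mode, -t, = comma field
-- terminator, -T = trusted connection, -S = server name.
EXEC master..xp_cmdshell
    'bcp "SELECT * FROM ReportDB.dbo.WeeklyReportView" queryout D:\export\report.csv -c -t, -T -S MYSERVER'
```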
Bruce Loehle-Conger
MVP SQL Server Reporting Services
"Bruce L-C [MVP]" <bruce_lcNOSPAM@.hotmail.com> wrote in message
news:eES$LTZPIHA.4272@.TK2MSFTNGP06.phx.gbl...
> Do not waste your time trying to run the report different ways, it will
> not affect the time.
> Your time is coming in two places. One, does it really only take 5 minutes
> to extract 5 million records? I'm not sure how it works when executing a
> stored procedure from Query Analyzer that returns 5 million records,
> whether it really retrieves all the records or not. Anyway, I think that
> is one area, the number of records being physically returned takes time.
> Second, RS 2000 and RS 2005 does all rendering in RAM. It does not page
> anything out to disk or utilize the disk at all when rendering. I am
> pretty sure this is changing substantially in RS 2008. Adding more RAM to
> the server will help.
> The other option is to use SSIS instead.
>
> --
> Bruce Loehle-Conger
> MVP SQL Server Reporting Services
> <emery.ian@.googlemail.com> wrote in message
> news:e977052b-81b7-404f-88e1-d14968fbe0ab@.e25g2000prg.googlegroups.com...
>>I have a report that produces over 3 million rows, this report is
>> export to CSV and is scheduled to run weekly.
>> Sometimes the report is needed to run manually. Initially this report
>> timed out, now the report runs but takes a very long time to complete.
>> If the stored procedure is run directly it takes about 5 mins to
>> execute. Running in reporting services takes upward of an hour.
>> Does anyone have suggestions on how to speed up this report.
>> my initial thoughts would be to use SSRS to populate the parameters
>> and maybe hook into the reporting engine and execute the stored
>> procedure directly with the parameters defined
>>
>> thanks in advance
>> Ian
>

Wednesday, March 21, 2012

data diff between rows

Hi All,
in my database i have counter values and a datestamp:
2008-01-17 16:15:10.293     516,9989
2008-01-17 16:30:09.577     517,0241
2008-01-17 16:45:10.420     517,0357
2008-01-17 17:00:10.060     517,0602
2008-01-17 17:15:10.700     517,0728
etc.
In my report I need the data difference for every hour. I have no idea how to get
it (subtract the last hour's summarised value from this hour's summarised value):
2008-01-17 16    0,0368     (=517,0357-516,9989)
2008-01-17 17    0,0371     (=517,0728-517,0357)
thanks for advise
bala nemate|||hmmm.. no answer.
is it too hard or too simple for you? ;]
bn
"bala nemate" <bala.nemateDELETETHIS@.gmail.com> wrote in message
news:uqQCMHbWIHA.4440@.TK2MSFTNGP06.phx.gbl...
> Hi All,
> in my database i have counter values and datestamp:
> 2008-01-17 16:15:10.293 516,9989
> 2008-01-17 16:30:09.577 517,0241
> 2008-01-17 16:45:10.420 517,0357
> 2008-01-17 17:00:10.060 517,0602
> 2008-01-17 17:15:10.700 517,0728
>
> etc.
> i my report i need data differece at every hour. I have no idea how to get
> it (from this hour summarised value substract last hours summarised value)
> 2008-01-17 16 0,0368 (=517,0357-517,0357)
> 2008-01-17 17 0,0371 (=517,0728-517,0357)
> thanks for advise
> bala nemate
>|||On Jan 20, 9:47=A0pm, "bala nemate" <bala.nemateDELETET...@.gmail.com>
wrote:
> hmmm.. no answer.
> it is to hard or to simple to you?;]
> bn
> "bala nemate" <bala.nemateDELETET...@.gmail.com> wrote in message
> news:uqQCMHbWIHA.4440@.TK2MSFTNGP06.phx.gbl...
>
> > Hi All,
> > in my database i have counter values and datestamp:
> > 2008-01-17 16:15:10.293     516,9989
> > 2008-01-17 16:30:09.577     517,0241
> > 2008-01-17 16:45:10.420     517,0357
> > 2008-01-17 17:00:10.060     517,0602
> > 2008-01-17 17:15:10.700     517,0728
> > etc.
> > i my report i need data differece at every hour. I have no idea how to get
> > it (from this hour summarised value substract last hours summarised value)
> > 2008-01-17 16    0,0368     (=517,0357-516,9989)
> > 2008-01-17 17    0,0371     (=517,0728-517,0357)
> > thanks for advise
> > bala nemate
First a subquery (or CTE) that shows the counter for each hour instead
of each quarter.
Then join this CTE with itself to match each hour to the previous one.
WITH Hours AS (
    SELECT Hour    = convert(varchar(13), TimestampColName, 120),
           Counter = max(CounterColName),
           RowNum  = row_number() OVER (ORDER BY convert(varchar(13), TimestampColName, 120))
    FROM TableName
    WHERE ...
    GROUP BY convert(varchar(13), TimestampColName, 120)
)
SELECT H1.Hour, Diff = H1.Counter - H2.Counter
FROM Hours H1
JOIN Hours H2 ON H1.RowNum = H2.RowNum + 1|||Hi RBot,
well, i wanted to do this only on the SSRS side.
on the SQL side I did exactly what you are suggesting, but in a more
difficult way (well, I am not a DB programmer :) )
thanks man, you improved my SQL knowledge a lot! :)
br
bala nemate
"RBot" <renato_buda@.iprimus.com.au> wrote in message
news:66c16db1-470f-4d1d-bba2-f06f9e1cc25d@.y5g2000hsf.googlegroups.com...
On Jan 20, 9:47 pm, "bala nemate" <bala.nemateDELETET...@.gmail.com>
wrote:
> hmmm.. no answer.
> it is to hard or to simple to you?;]
> bn
> "bala nemate" <bala.nemateDELETET...@.gmail.com> wrote in message
> news:uqQCMHbWIHA.4440@.TK2MSFTNGP06.phx.gbl...
>
> > Hi All,
> > in my database i have counter values and datestamp:
> > 2008-01-17 16:15:10.293 516,9989
> > 2008-01-17 16:30:09.577 517,0241
> > 2008-01-17 16:45:10.420 517,0357
> > 2008-01-17 17:00:10.060 517,0602
> > 2008-01-17 17:15:10.700 517,0728
> > etc.
> > i my report i need data differece at every hour. I have no idea how to
> > get
> > it (from this hour summarised value substract last hours summarised
> > value)
> > 2008-01-17 16 0,0368 (=517,0357-516,9989)
> > 2008-01-17 17 0,0371 (=517,0728-517,0357)
> > thanks for advise
> > bala nemate- Hide quoted text -
> - Show quoted text -
First a subquery (or CTE) that shows the counter for each hour instead
of each quarter.
Then join this CTE with itself to match each hour to the previous one.
WITH Hours AS (
    SELECT Hour    = convert(varchar(13), TimestampColName, 120),
           Counter = max(CounterColName),
           RowNum  = row_number() OVER (ORDER BY convert(varchar(13), TimestampColName, 120))
    FROM TableName
    WHERE ...
    GROUP BY convert(varchar(13), TimestampColName, 120)
)
SELECT H1.Hour, Diff = H1.Counter - H2.Counter
FROM Hours H1
JOIN Hours H2 ON H1.RowNum = H2.RowNum + 1
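For readers who want to sanity-check the CTE above, the same two steps (bucket the samples by hour via the first 13 characters of the timestamp, keep the max counter per hour, then diff consecutive hours) can be sketched in Python. The decimal commas from the question become decimal points here:

```python
# (timestamp, counter) samples as in the question, with decimal points
samples = [
    ("2008-01-17 16:15:10.293", 516.9989),
    ("2008-01-17 16:30:09.577", 517.0241),
    ("2008-01-17 16:45:10.420", 517.0357),
    ("2008-01-17 17:00:10.060", 517.0602),
    ("2008-01-17 17:15:10.700", 517.0728),
]

# Bucket by hour -- ts[:13] is the same trick as convert(varchar(13), ..., 120),
# and keeping the max per bucket mirrors the GROUP BY / max() in the CTE.
hours = {}
for ts, counter in samples:
    hour = ts[:13]
    hours[hour] = max(hours.get(hour, counter), counter)

# Diff each hour against the previous one -- mirrors the self-join on RowNum.
keys = list(hours)
diffs = {keys[i]: round(hours[keys[i]] - hours[keys[i - 1]], 4)
         for i in range(1, len(keys))}
print(diffs)  # {'2008-01-17 17': 0.0371}
```

With only these five samples there is one full hour-to-hour pair, so only the 17:00 difference (0,0371) appears; the 16:00 difference in the question also needs the 15:xx bucket.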

Sunday, March 11, 2012

Data Conversion Error on Excel Destination

I am inserting rows using OLEDBDestination and want to redirect all error rows to EXCEL Destination.

I have used Data Conversion Transformation to Convert all strings to Unicode string fields before sending it to Excel Destination.

But it gives the following error.

[Data Conversion [16]] Error: Data conversion failed while converting column 'A' (53) to column "Copy of A" (95). The conversion returned status value 8 and status text "DBSTATUS_UNAVAILABLE".

[Data Conversion [16]] Error: The "output column "Copy of A" (95)" failed because error code 0xC020908E occurred, and the error row disposition on "output column "Copy of A" (95)" specifies failure on error. An error occurred on the specified object of the specified component.

Can someone please tell me what I should do to make it work?

Thanks,

Did you set the error output of the OLE Destination to "Redirect Rows"?

You might also have metadata problems and could maybe stand to recreate the Excel destination.|||

I have set the error output to redirect rows.

Actually I have a Data Flow task with OLEDBSource --> OLEDBDestination (Redirect Error Rows) --> Data Conversion --> Excel Destination.

When I do this it gives me the following error.

Data conversion failed while converting column "A" (53) to column "A" (95). The conversion returned status value 8 and status text "DBSTATUS_UNAVAILABLE".

[Data Conversion [16]] Error: The "output column "Copy of A" (95)" failed because error code 0xC020908E occurred, and the error row disposition on "output column "Copy of A" (95)" specifies failure on error. An error occurred on the specified object of the specified component.

But this data flow task, OLEDBSource --> Data Conversion --> Excel Destination, works fine.

I am doing the same data conversion in both data flow tasks.

Please let me know what I am doing wrong.

|||Well, I suppose check and double check your column mappings going into and out of the OLE DB destination.|||

prg wrote:

I am inserting rows using OLEDBDestination and want to redirect all error rows to EXCEL Destination.

I have used Data Conversion Transformation to Convert all strings to Unicode string fields before sending it to Excel Destination.

But it gives the following error.

[Data Conversion [16]] Error: Data conversion failed while converting column 'A' (53) to column "Copy of A" (95). The conversion returned status value 8 and status text "DBSTATUS_UNAVAILABLE".

[Data Conversion [16]] Error: The "output column "Copy of A" (95)" failed because error code 0xC020908E occurred, and the error row disposition on "output column "Copy of A" (95)" specifies failure on error. An error occurred on the specified object of the specified component.

Can someone please tell me what should I do to make it work?

Thanks,

Actually I would check the Data Conversion transform, specifically column A, since that is where the error is being generated. Try placing a data viewer to inspect the values that go into the data conversion. What happens if you delete column A from the data conversion? Does it fail on a different column?

|||

I have checked the Data Conversion transform again. I have removed column A and used another column. Still it fails.

This is kind of weird. I can't figure out where the problem is.

Is it that the Excel Destination cannot be used to redirect rows that have errors? It seems to work perfectly when I do OLEDBSource -> Data Conversion -> Excel Destination.

|||Well, if the errors are created (redirected) because of conversion errors, then it is natural that the redirected error rows don't match your destination data types. They can't because that's why they are in error.

Make sure that the data types of the Excel destination match the data types of your SOURCE data, not the data types of the OLE DB Destination table. Not sure if this will work because the metadata probably won't match...|||
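The point about redirected rows keeping their source datatypes can be illustrated outside SSIS. A convert-or-redirect loop necessarily emits failed rows in their original (source) form, because the conversion is exactly what failed; that is why the error output's metadata matches the source, not the destination. A hedged sketch of that pattern (hypothetical column names, not SSIS API code):

```python
def convert_or_redirect(rows):
    """Try to convert each source row; failures go to the error output
    unchanged, so error rows keep the SOURCE datatypes (strings here)."""
    good, errors = [], []
    for row in rows:
        try:
            good.append({"id": int(row["id"]), "amount": float(row["amount"])})
        except (ValueError, TypeError):
            errors.append(row)  # still the original, unconverted strings
    return good, errors

# Source data arrives as strings, as from a flat file or Excel sheet
source = [
    {"id": "1", "amount": "10.5"},
    {"id": "2", "amount": "not-a-number"},  # will fail conversion
    {"id": "oops", "amount": "3.0"},        # will fail conversion
]
good, errors = convert_or_redirect(source)
print(len(good), len(errors))  # 1 2
```

Note that the rows in `errors` are still dictionaries of strings: any destination attached to the error path has to accept the source types, which is the mismatch being discussed in this thread.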

Hi all,

I think I have exactly the same problem using a SQL destination for the error output.

Error: 0xC02020C5 at GS_ORDREC, Data Conversion [1827]: Data conversion failed while converting column "OL_DONORD" (102) to column "Test" (1880). The conversion returned status value 8 and status text "DBSTATUS_UNAVAILABLE".

Error: 0xC0209029 at GS_ORDREC, Data Conversion [1827]: The "output column "Test" (1880)" failed because error code 0xC020908E occurred, and the error row disposition on "output column "Test" (1880)" specifies failure on error. An error occurred on the specified object of the specified component.

Error: 0xC0047022 at GS_ORDREC, DTS.Pipeline: The ProcessInput method on component "Data Conversion" (1827) failed with error code 0xC0209029. The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running.

SSIS package "GS_ODSREC.dtsx" finished: Failure.

-

I have checked and rechecked the package; everything is OK. Strange thing: when I try to convert the "ErrorColumn" field, the package works fine.

Regards

Ayzan

|||

Ayzan wrote:

Hi all,

I think I have exactly the same problem using a sql destination for error output.

Error: 0xC02020C5 at GS_ORDREC, Data Conversion [1827]: Data conversion failed while converting column "OL_DONORD" (102) to column "Test" (1880). The conversion returned status value 8 and status text "DBSTATUS_UNAVAILABLE".

Error: 0xC0209029 at GS_ORDREC, Data Conversion [1827]: The "output column "Test" (1880)" failed because error code 0xC020908E occurred, and the error row disposition on "output column "Test" (1880)" specifies failure on error. An error occurred on the specified object of the specified component.

Error: 0xC0047022 at GS_ORDREC, DTS.Pipeline: The ProcessInput method on component "Data Conversion" (1827) failed with error code 0xC0209029. The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running.

SSIS package "GS_ODSREC.dtsx" finished: Failure.

-

I check and recheck the package, everything is ok. Strange thing, when I try to convert the "ErrorColumn" field, the package works fine.

Regards

Ayzan

Okay, you *don't* have the same problem as the OP. If you are using a SQL Server destination, SSIS will not automatically cast/convert datatypes for you. The data flow datatypes (metadata) will need to match EXACTLY with the SQL Server destination table.|||

Thank you Phil Brammer,

Sorry, this IS the same problem, because I have tried a SQL Server destination, a DataReader destination, and a flat file destination for redirecting the failed rows, and I still get the same message ... in spite of casting/converting datatypes.

Regards

Ayzan

|||

Ayzan wrote:

Thank you Phil Brammer,

Sorry this IS the same problem, cause I try with a Sql server destination, a data reader destination, a flat file destination, for redirecting the failed rows, and I have still the same message ... in spite of casting/converting datatypes

Regards

Ayzan

Ah, see, you left out valuable information! So I'll ask this as I have done before. Before we continue, please provide the metadata information as it stands before going into the destination. Then, please provide the table column information for the destination as well.

One other thing to note is that there could be one bad row that is causing this error... Have you redirected errors and inspected them?|||

Ok,

There is a simple Data Flow, so as to reproduce the error:

#Data Source, OLEDB, AdventureWorksDW

#OLE DB Source, SQL Command :

SELECT TOP (10) TimeKey, OrganizationKey, DepartmentGroupKey, ScenarioKey, AccountKey+100 as AccountKey, Amount
FROM FactFinance

#OLE DB Destination

AdventureWorks.FactFinance

#Error Output to Data Conversion

TimeKey -> DT_STR

#DataReader Destination

Enjoy !

|||

Ayzan wrote:

Ok,

There is a simple Data Flow, so as to reproduce the error:

#Data Source, OLEDB, AdventureWorksDW

#OLE DB Source, SQL Command :

SELECT TOP (10) TimeKey, OrganizationKey, DepartmentGroupKey, ScenarioKey, AccountKey+100 as AccountKey, Amount
FROM FactFinance

#OLE DB Destination

AdventureWorks.FactFinance

#Error Output to Data Conversion

TimeKey -> DT_STR

#DataReader Destination

Enjoy !

I don't have a AdventureWorks.FactFinance table. I have the data warehouse table... Are you creating your own FactFinance table in the AdventureWorks table? The mere prefix of "Fact" indicates that it should be in AdventureWorksDW.|||

Sorry,

#OLE DB Destination

AdventureWorksDW.FactFinance

Well in fact, whatever the source/destination, you just have to generate an error while trying to insert new records.

Regards

Ayzan

|||

Ayzan wrote:

Sorry,

#OLE DB Destination

AdventureWorksDW.FactFinance

Well in fact, whatever the source/destination, you just have to generate an error while trying to insert new records.

Regards

Ayzan

I'm utterly confused now. So your source and destination tables are the same? What are you doing exactly, and what are you expecting to see? Why would I select the top 10 records from FactFinance and then turn around and insert them again? Regardless, nowhere should a DT_STR datatype be picked up on the Timekey field because it is an integer field.

Nevermind. I understand now. Hang on.
