Tuesday, March 27, 2012
Importing a data file via a Stored Procedure
I want to import a data file into a SQL database with a stored procedure, but I have no idea how to do that.
The procedure must also run automatically each morning at 10 o'clock.
Greetz, Tom
Create a stored procedure that imports the data, and set up a job to run that procedure.
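A minimal sketch of that approach, assuming a comma-delimited text file; the file path, table and procedure names below are placeholders, not from the original post:
CREATE PROCEDURE dbo.usp_ImportDataFile
AS
BEGIN
    SET NOCOUNT ON;
    -- Load the delimited file straight into an existing staging table
    BULK INSERT dbo.ImportStaging
    FROM 'C:\Import\datafile.txt'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);
END
A SQL Server Agent job with a single T-SQL step (EXEC dbo.usp_ImportDataFile) and a daily schedule starting at 10:00 then takes care of the automatic morning run.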
Monday, March 19, 2012
Data converted when loaded into SQL 2k5 table
Hi All
Data in access is converted when loaded into SQL 2k5
00 >> 0
01 >> 1
03 >> 3
I am using the SSIS import wizard to load data from MS Access ’03 into a SQL 2k5 database. Some of the data is converted, i.e. the leading zero is dropped when it is loaded into the SQL table.
The data type on the source field is byte with a 00 format. The data type in 2k5 is tinyint.
I need some help with getting the data to load into 2k5 exactly as it appears in access.
Thanks for your help.
Nats
Hi Nats
One quick question - how is the data going to be used once it has been imported? To store data in the format that you specified, you'll have to declare the column as a text-based datatype, such as VARCHAR, which is not necessarily the best option for numerical data.
If you're going to perform calculations on the data then it might be better to store the data as TINYINT then manipulate the formatting when you want to return / display the data.
e.g.
DECLARE @int INT
SET @int = 1
SELECT RIGHT('00' + CAST(@int AS VARCHAR(2)), 2)
...will return a text string of '01' even though @int is of type integer.
Chris
|||I don't think it will be used in any calculation.
Thanks,
Data conversion during DTS import
I have a text file in which one field is a date, but in the file it is in '20030818' format, and I am trying to import that file into a table which has a datetime column.
Is it possible to convert '20030818' to datetime during DTS import?
If yes, please let me know how.
Thanks.
Use:
1. VBScript transformation (ActiveX)
2. SELECT CONVERT(datetime, '20030818')
3. DTS datetime conversion facility
rohan
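For option 2, a minimal T-SQL sketch (the variable is just illustrative); style 112 is the ISO yyyymmdd format, so the conversion does not depend on language or dateformat settings:
DECLARE @raw varchar(8)
SET @raw = '20030818'
SELECT CONVERT(datetime, @raw, 112)  -- returns 2003-08-18 00:00:00.000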
>--Original Message--
>Hi,
>I have a text file in which one field is a date but in
>file it is in '20030818' format and I am tring to import
>that file to a table which has a column datetime.
>is it possible to convert '20030818' to datetime during
>DTS import?
>if yes please let me know how ?
>Thanks.
>.
Sunday, March 11, 2012
Data Conversion Errors on Excel Import into existing table
I recently installed the SQL Server 2005 client and am now attempting to import data from a spreadsheet into an existing table. This works fine with SQL Server 2000, but I am getting data conversion truncation errors that stop the process when it runs using the import utility in SQL Server 2005.
Any help would be appreciated.
More information needed.
Why is it failing? What does the error message say?
You will need to open up the package and edit it to do the conversions that you require.
-Jamie
Data Conversion Error
Hi,
I am transferring data from a flat file source to a SQL database. Initially I got a validation error saying:
"Validation error. Import Employee Information: [DataFlowTask] [6384]: The column "[ColumnName]" can't be inserted because the conversion between types DT_STR and DT_DBTIMESTAMP is not supported." (I have 6 date fields)
The destination in SQL has the field in "datetime" format.
So I used a "Data Conversion" task to transform the fields, thereby resolving the validation errors. But when I execute the dtsx I get the following errors:
[Data Conversion [322]] Error: Data conversion failed while converting column "[ColumnName]" (79) to column "[Modified_ColumnName]" (346). The conversion returned status value 2 and status text "The value could not be converted because of a potential loss of data.".
[Data Conversion [322]] Error: The "output column "[Modified_ColumnName]" (346)" failed because error code 0xC020907F occurred, and the error row disposition on "output column "[Modified_ColumnName]" (346)" specifies failure on error. An error occurred on the specified object of the specified component.
[DTS.Pipeline] Error: The ProcessInput method on component "Data Conversion" (322) failed with error code 0xC0209029. The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running.
[DTS.Pipeline] Error: Thread "WorkThread0" has exited with error code 0xC0209029.
["InputFileName" Input [1]] Error: Setting the end of rowset for the buffer failed with error code 0xC0047020.
[DTS.Pipeline] Error: The PrimeOutput method on component "InputFileName Input" (1) returned error code 0xC0209017. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.
[DTS.Pipeline] Error: Thread "SourceThread0" has exited with error code 0xC0047038.
Can anybody help me with this?
The "The value could not be converted because of a potential loss of data" error generally occurs when you're trying to convert a wider data type value into a data type too narrow to hold it. Can you look at the metadata in your data flow to see what the exact data types are for the source ("[ColumnName]") and target ("[Modified_ColumnName]") columns?
|||Can you post some of your sample data in the [ColumnName] column?
|||Hi,
I am transferring data from a .dat file to tables in the SQL DB. The output from the .dat file is in the [DT_STR] datatype.
[ColumnName1] [ColumnName2]
04/27/1987 0
05/20/1958 0
03/12/1982 1
05/04/1957 2
Error 23 Validation error. [TaskName]: [TableName] [9]: The column "[ColumnName1]" can't be inserted because the conversion between types DT_STR and DT_DBTIMESTAMP is not supported.
The table in the DB has "datetime" as its datatype
Error 26 Validation error. [TaskName]: [TableName] [9]: The column "[ColumnName2]" can't be inserted because the conversion between types DT_STR and DT_I4 is not supported.
The table in the DB has "int" as its datatype
So I used the "Data Conversion" task in SSIS and resolved the validation error.
But when I executed the dtsx I got the error mentioned in my previous post.
|||I'd add a derived column transformation between the source and the destination. In that derived column, you'd cast the values appropriately.
Expressions below:
NewOutputColumnName1: (DT_DBTIMESTAMP)[ColumnName1]
NewOutputColumnName2: (DT_I4)[ColumnName2]
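If the straight casts still fail on this data (the mm/dd/yyyy strings can be locale-sensitive), one fallback (my own suggestion, not something from the thread) is to land the raw strings in varchar columns and do the conversion in T-SQL, where the style can be stated explicitly:
SELECT CONVERT(datetime, '04/27/1987', 101) AS ConvertedDate,  -- style 101 = mm/dd/yyyy
       CAST('0' AS int) AS ConvertedFlag                       -- the DT_STR flag into an int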
Data contained in a CDATA Section in XML is lost
I am trying to import a large quantity of movie information, and all of the reviews, synopses, etc. are contained in CDATA sections. Example:
<synopsis size="100"><![CDATA[Four vignettes feature thugs in a pool hall, a tormented ex-con, a cop and a gangster.]]></synopsis>
Sounds like a good one, no?
The record gets inserted into the database; however, it contains a NULL in the field for the synopsis text. I would imagine the reason for this lies with the nature of CDATA and that SSIS is ignoring it.
Any thoughts would be appreciated. Thanks.
Yes, this is a known issue in RTM (due to a bug in the reader that the XML Source adapter calls behind the scenes); it should be fixed in SP1 already. Please let us know if you still see issues after applying SP1.
Thanks
Wenyang
Wednesday, March 7, 2012
Data being cut off on text import
I am trying to use the SQL Server 2000 import/export function to import a tab-delimited text file. Some of the lines have upwards of 27,000 characters in them. The first column of data is only 16 characters; the rest is in the second column. The import works, but the largest data set ends up with only 8194 characters of data in the field. I am importing the data into an NTEXT field, but no joy. Any suggestions?
Thanks.
JR
How are you determining that there are 8194 characters? Are you selecting
the data in Query Analyzer, copying it, and measuring the length? In this
case, the output is limited to 8192 characters, and does not necessarily
truly represent the data in the column. Try applying DATALENGTH() to the
column directly.
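A minimal sketch of that check (the table and column names here are made up for illustration); DATALENGTH on an NTEXT column returns bytes, and NTEXT stores two bytes per character:
SELECT DATALENGTH(NoteText) AS ByteCount,
       DATALENGTH(NoteText) / 2 AS CharacterCount
FROM dbo.ImportedNotes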
"JR" <jriker1@.yahoo.com> wrote in message
news:4d9da209-6bd6-4537-a3f9-184b6544ff49@.l1g2000hsa.googlegroups.com...
>I am trying to use SQL Server 2000 import/export funtion to import a
> tab delimited text file. Some of the lines have upwards fo 27,000
> characters in it. The first column of data is only 16 characters, the
> rest in the second column. When I import it works however end up on
> the largest data set to only have 8194 characters of data in the
> field. I am importing the data into an NTEXT field but no joy. Any
> suggestions?
> Thanks.
> JR
Sunday, February 19, 2012
Daily job runs slow after upgrade to 2005.
We have a process that uses XMLSpy to import an xml file of products/pricing daily. We receive this file through an ftp site. This process usually ran
in 15-20 min with SQL 2000. It is now running about 2 1/2 hours which is
unacceptable. Any ideas what could be so different?
The database used is set to compatibility level 90, simple recovery mode,
ansi defaults. We've tried turning off create and update of statistics as
this is temporary data used for updating other tables. Nothing seems to
make a difference. I just bet there's something simple I'm missing here..
There are only about 8 tables, 14k-15k rows each, and no indexes anywhere; we add those after the import.
Many thanks for any input !!
That's not much to go on, Tim. I don't know what this XMLSpy does, but even 15 minutes is way too long a time to import just a few thousand rows. I would think 15 seconds would be too long. Can you give some more details on exactly what it is doing? Do you have SET NOCOUNT ON in the job step?
--
Andrew J. Kelly SQL MVP
"Tim Greenwood" <tim_greenwood A-T yahoo D-O-T com> wrote in message
news:%23yy6klgoGHA.5084@.TK2MSFTNGP03.phx.gbl...
> We have a process that uses XMLSpy to import an xml file of
> products/pricing daily. We receive this file through an ftp site. This
> process usually ran in 15-20 min with SQL 2000. It is now running about 2
> 1/2 hours which is unacceptable. Any ideas what could be so different?
> The database used is set to compatibility level 90, simple recovery mode,
> ansi defaults. We've tried turning off create and udpate of statistics as
> this is temporary data used for updating other tables. Nothing seems to
> make a difference. I just bet there's something simple I'm missing here..
> There is only about 8 tables, 14k-15k rows each and no indexes anywhere we
> add those after the import.
> Many thanks for any input !!
>|||We receive product updates in an XML file. We use XMLSpy because the
company we receive the file from does...just so we can skip any
inconsistencies. XMLSpy uses either SQLOLEDB or SQL Native Client
connection and then creates the necessary tables in an empty database. It
then loads all the product rows from the XML file into the respective
tables. 15 minutes was reasonable, we think, as XMLSpy is doing an enormous
amount of string processing and then inserting rows from a workstation into
the server.
XMLSpy is also deriving keys from related data that we specify. These keys
are included in the data but they are just integer columns at that point.
We create the actual indexes after the import is done. This is just another
reason for 15 minutes vs 15 seconds.
At any rate, it does boil down to just reading through an XML file and
inserting rows into a table. I'm not wanting to be skimpy on details but
that's about all there is to it. This isn't a procedure I have control of,
it is a 3rd party COM object we call into to do the import so I cannot
address the NOCOUNT issue. I do not believe it to be the 3rd party's issue
either as it worked fine until now. We've tried this on both SQL2k5 32 and
64 bit servers with the same results.
I've double-checked the recovery model is set to simple...compatibility is
90. Even turned off auto create/update statistics. Would it help any to
use bulk recovery mode? We don't need ANY logging of this data; it is completely transient in nature.
Thanks for responding!!
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:OjwfA7hoGHA.148@.TK2MSFTNGP04.phx.gbl...
> That's not much to go on Tim. I don't know what this XMLSpy does but even
> 15 minutes is way to long a time to import just a few thousand rows. I
> would think 15 seconds would be too long. Can you give some more details
> on exactly what it is doing? Do you have SET NOCOUNT ON in the job step?
> --
> Andrew J. Kelly SQL MVP
> "Tim Greenwood" <tim_greenwood A-T yahoo D-O-T com> wrote in message
> news:%23yy6klgoGHA.5084@.TK2MSFTNGP03.phx.gbl...
>> We have a process that uses XMLSpy to import an xml file of
>> products/pricing daily. We receive this file through an ftp site. This
>> process usually ran in 15-20 min with SQL 2000. It is now running about
>> 2 1/2 hours which is unacceptable. Any ideas what could be so different?
>> The database used is set to compatibility level 90, simple recovery mode,
>> ansi defaults. We've tried turning off create and udpate of statistics
>> as this is temporary data used for updating other tables. Nothing seems
>> to make a difference. I just bet there's something simple I'm missing
>> here..
>> There is only about 8 tables, 14k-15k rows each and no indexes anywhere
>> we add those after the import.
>> Many thanks for any input !!
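Regarding the bulk recovery question above, a side note that is not from the thread: the recovery model can be switched with ALTER DATABASE, but bulk-logged recovery only reduces logging for minimally logged operations such as BULK INSERT or SELECT INTO; ordinary row-by-row inserts from an OLE DB client are fully logged under any recovery model, so a change like the sketch below is unlikely to explain or fix the slowdown on its own.
-- Hedged sketch; 'ImportStaging' is a placeholder database name
ALTER DATABASE ImportStaging SET RECOVERY BULK_LOGGED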
>|||Tim Greenwood wrote:
> We have a process that uses XMLSpy to import an xml file of products/pricing
> daily. We receive this file through an ftp site. This process usually ran
> in 15-20 min with SQL 2000. It is now running about 2 1/2 hours which is
> unacceptable. Any ideas what could be so different?
> The database used is set to compatibility level 90, simple recovery mode,
> ansi defaults. We've tried turning off create and udpate of statistics as
> this is temporary data used for updating other tables. Nothing seems to
> make a difference. I just bet there's something simple I'm missing here..
> There is only about 8 tables, 14k-15k rows each and no indexes anywhere we
> add those after the import.
> Many thanks for any input !!
>
Ignoring the XML aspect for now, start with the basics, look at Perfmon
while this process is running. Look at Avg disk queue lengths, CPU %.
What other databases are hosted on this machine? Is the transaction log
on the same volume as the data file? Where is TEMPDB? Multi-processor
machine?
Tracy McKibben
MCDBA
http://www.realsqlguy.com
|||Is this a new server or OS as well? Is this XMLSpy running on the same server as SQL Server? If so, are you sure there is enough memory for both?
Have you looked at profiler and perfmon to see what may be going on? You
need to narrow down the possibilities otherwise it's really hard to say.
These may help:
http://www.sql-server-performance.com/sql_server_performance_audit10.asp
Performance Audit
http://www.microsoft.com/technet/prodtechnol/sql/2005/library/operations.mspx
Performance WP's
http://www.swynk.com/friends/vandenberg/perfmonitor.asp Perfmon counters
http://www.sql-server-performance.com/sql_server_performance_audit.asp
Hardware Performance CheckList
http://www.sql-server-performance.com/best_sql_server_performance_tips.asp
SQL 2000 Performance tuning tips
http://www.support.microsoft.com/?id=224587 Troubleshooting App
Performance
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adminsql/ad_perfmon_24u1.asp
Disk Monitoring
http://sqldev.net/misc/WaitTypes.htm Wait Types
--
Andrew J. Kelly SQL MVP
"Tim Greenwood" <tim_greenwood A-T yahoo D-O-T com> wrote in message
news:OuMtLKioGHA.4024@.TK2MSFTNGP03.phx.gbl...
> We receive product updates in an XML file. We use XMLSpy because the
> company we receive the file from does...just so we can skip any
> inconsistencies. XMLSpy uses either SQLOLEDB or SQL Native Client
> connection and then creates the necessary tables in an empty database. It
> then loads all the product rows from the XML file into the respective
> tables. 15 minutes was reasonable we think as XMLSpy is doing an
> enourmous amount of string processing and then inserting rows from a
> workstation into the server.
> XMLSpy is also deriving keys from related data that we specify. These
> keys are included in the data but they are just integer columns at that
> point. We create the actual indexes after the import is done. This is
> just another reason for 15 minutes vs 15 seconds.
> At any rate, it does boil down to just reading through an XML file and
> inserting rows into a table. I'm not wanting to be skimpy on details but
> that's about all there is to it. This isn't a procedure I have control
> of, it is a 3rd party COM object we call into to do the import so I cannot
> address the NOCOUNT issue. I do not believe it to be the 3rd parties
> issue either as it worked fine until now. We've tried this on both SQL2k5
> 32 and 64 bit servers with the same results.
> I've double-checked the recovery method is set to simple...compatibility
> is 90. Even turned off auto create/update statistics. Would it help any
> to use bulk recovery mode? We don't need ANY logging of this data it is
> completely transient in nature.
> Thanks for responding!!
>
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:OjwfA7hoGHA.148@.TK2MSFTNGP04.phx.gbl...
>> That's not much to go on Tim. I don't know what this XMLSpy does but even
>> 15 minutes is way to long a time to import just a few thousand rows. I
>> would think 15 seconds would be too long. Can you give some more details
>> on exactly what it is doing? Do you have SET NOCOUNT ON in the job step?
>> --
>> Andrew J. Kelly SQL MVP
>> "Tim Greenwood" <tim_greenwood A-T yahoo D-O-T com> wrote in message
>> news:%23yy6klgoGHA.5084@.TK2MSFTNGP03.phx.gbl...
>> We have a process that uses XMLSpy to import an xml file of
>> products/pricing daily. We receive this file through an ftp site. This
>> process usually ran in 15-20 min with SQL 2000. It is now running about
>> 2 1/2 hours which is unacceptable. Any ideas what could be so
>> different?
>> The database used is set to compatibility level 90, simple recovery
>> mode, ansi defaults. We've tried turning off create and udpate of
>> statistics as this is temporary data used for updating other tables.
>> Nothing seems to make a difference. I just bet there's something simple
>> I'm missing here..
>> There is only about 8 tables, 14k-15k rows each and no indexes anywhere
>> we add those after the import.
>> Many thanks for any input !!
>>
|||Yes, I will run Profiler on Monday while this is running. To answer your questions: we have 7 databases on this server. All tlogs are on their own mirrored set of spindles, and tempdb is on its own mirrored set as well. It is a dual 64-bit machine, running 64-bit SQL 2005 and 64-bit Windows Server 2003 Enterprise.
"Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
news:OY4gPrioGHA.4036@.TK2MSFTNGP05.phx.gbl...
> Tim Greenwood wrote:
>> We have a process that uses XMLSpy to import an xml file of
>> products/pricing daily. We receive this file through an ftp site. This
>> process usually ran in 15-20 min with SQL 2000. It is now running about
>> 2 1/2 hours which is unacceptable. Any ideas what could be so different?
>> The database used is set to compatibility level 90, simple recovery mode,
>> ansi defaults. We've tried turning off create and udpate of statistics
>> as this is temporary data used for updating other tables. Nothing seems
>> to make a difference. I just bet there's something simple I'm missing
>> here..
>> There is only about 8 tables, 14k-15k rows each and no indexes anywhere
>> we add those after the import.
>> Many thanks for any input !!
> Ignoring the XML aspect for now, start with the basics, look at Perfmon
> while this process is running. Look at Avg disk queue lengths, CPU %.
> What other databases are hosted on this machine? Is the transaction log
> on the same volume as the data file? Where is TEMPDB? Multi-processor
> machine?
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com
Daily Import of file with date
I have an SSIS package that imports daily files that have the date in the file name. I'd like my SSIS package to pickup today's file and run the package.
I've been using expression builder to create the file name but it drops the leading zeros in the month and the day.
My Import file looks like this: SRMSNotes_DATA_20080118.txt
Using this code: "D:\\importdata\\srmsdata\\SRMSNotes_DATA_" + (DT_WSTR, 4) YEAR( GETDATE() ) + (DT_WSTR, 2)MONTH( GETDATE() ) + (DT_WSTR, 2) DAY( GETDATE() ) + ".txt"
SSIS looks for this file: SRMSNotes_DATA_2008118.txt
How do I make SSIS add the leading zeros?
Thanks for your help
"D:\\importdata\\srmsdata\\SRMSNotes_DATA_" +
(DT_WSTR, 4) YEAR( GETDATE())
+
((DATEPART("mm",GETDATE()) ) < 10 ? ("0" + (DT_WSTR, 2)MONTH( GETDATE())) : (DT_WSTR, 2)MONTH( GETDATE()))
+
(DT_WSTR, 2) DAY( GETDATE() ) + ".txt"
You'd need to apply something similar for the DAY part.
|||To get this:
20080213004235
YYYYMMDDHHMMSS
I use:
(DT_WSTR, 4) YEAR(GETDATE())
+
RIGHT(("0" + (DT_WSTR, 2) MONTH( GETDATE())),2)
+
RIGHT(("0" + (DT_WSTR, 2) DAY( GETDATE())),2)
+
RIGHT(("0" + (DT_WSTR, 2) DATEPART("hh", GETDATE())),2)
+
RIGHT(("0" + (DT_WSTR, 2) DATEPART("n", GETDATE())),2)
+
RIGHT(("0" + (DT_WSTR, 2) DATEPART("s", GETDATE())),2)
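For comparison, the same zero-padded yyyymmddhhmmss value can be produced in T-SQL (a sketch of my own, not from the thread); style 120 yields 'yyyy-mm-dd hh:mi:ss', and stripping the separators leaves the 14-digit stamp:
SELECT REPLACE(REPLACE(REPLACE(CONVERT(varchar(19), GETDATE(), 120), '-', ''), ' ', ''), ':', '')
-- e.g. 20080213004235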
Daily import into MS SQL from Access or Excel
I need to import three tables from Access or Excel into the MS SQL database daily: orders (basic customer info), items (product info and quantity), and options (options of quantity). The problem is that each of the 3 tables has to get an additional column when it is imported into SQL. For example, when an order comes into the intranet application, it will keep track of whether an item is in stock or out of stock by checking the product table. The two databases (MS SQL and Access or Excel) have identical columns, but the SQL one has some in addition. This is my first time working on this kind of case; if somebody could give me abstract steps or suggestions to accomplish this, I'll be very happy ;)
Thanks
kiss
... use default values and/or DTS to transfer the data to its respective area.
If you wish to create an application to handle it, then just make your SQL more precise to match your needs.
No need for kisses. It's unprofessional.
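To illustrate the default-value part of that suggestion (the table and column names below are made up for the example, not taken from the post), the extra column can be given a default so rows arriving from the DTS transfer pick up a value automatically:
-- Add the extra column with a default; imported rows that don't supply it get 1 ('in stock')
ALTER TABLE dbo.Items ADD InStock bit NOT NULL DEFAULT (1)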
Friday, February 17, 2012
Database Creation
Yes.
In MSAccess, go to the Tools->Database Utilities menu. Select the Upsizing Wizard.
If you are lucky, it will work. If not, there are other options (DTS).