Monday, March 19, 2012
Data conversion with dynamic SQL
I'm new to SQL Server and am having a problem with dynamic SQL inside a stored
procedure.
I got an error (the error message and procedure are below) when passing a smallint
value (@batch_ID) to the dynamic SQL. Do I need to convert it to a different
data type, and if so, how?
Thanks a lot.
Here is the error message I got:
**********
0
ST_ccst_code_value
Msg 245, Level 16, State 1, Procedure CR_Update_Stage_tables, Line 23
Conversion failed when converting the nvarchar value 'Update
ST_ccst_code_value Set DTS_BATCH_ID =' to data type smallint.
Here is the procedure:
****
ALTER PROCEDURE [dbo].[CR_Update_Stage_tables]
@BatchName nvarchar(30)
AS
Declare @batch_ID smallint
select @batch_ID = 0
Declare @SQL VarChar(1000)
Declare @ST_table nvarchar(30)
exec SP_CR_Get_Batch_ID @BatchName, @batch_ID output
print @batch_ID
select @ST_table = 'ST_'+ @BatchName
print @ST_table
SELECT @SQL = 'Update ' + @ST_table + ' Set DTS_BATCH_ID ='+ @batch_ID
Exec (@SQL)
GO

When you try to add (concatenate) the value for @batch_ID to the query
string, SQL Server tries to implicitly convert the string to an integer, because int
has a higher data type precedence than char data types.
Try:
SELECT @SQL = 'Update ' + @ST_table + ' Set DTS_BATCH_ID =' + CAST(@batch_ID as nvarchar(10))
Exec (@SQL)
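For completeness, here is a minimal sketch of the whole procedure rewritten around sp_executesql instead of EXEC(), assuming SP_CR_Get_Batch_ID exists exactly as in the original post. The batch ID is passed as a typed parameter, so no conversion to nvarchar is needed at all, and QUOTENAME guards the one piece that still has to be concatenated (the table name):

ALTER PROCEDURE [dbo].[CR_Update_Stage_tables]
    @BatchName nvarchar(30)
AS
BEGIN
    DECLARE @batch_ID smallint
    DECLARE @ST_table nvarchar(30)
    DECLARE @SQL nvarchar(1000)

    SELECT @batch_ID = 0
    EXEC SP_CR_Get_Batch_ID @BatchName, @batch_ID OUTPUT

    SELECT @ST_table = 'ST_' + @BatchName

    -- Only the table name is built into the string; the smallint travels
    -- as a real parameter, so no implicit conversion can fail.
    SELECT @SQL = N'UPDATE ' + QUOTENAME(@ST_table) + N' SET DTS_BATCH_ID = @id'
    EXEC sp_executesql @SQL, N'@id smallint', @id = @batch_ID
END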
"Jessie" wrote:
> Hi, All,
> I;m new to SQl server and having a problem with dynamic sql within an
> procedure.
> I got error(the error msg and procedure are below) when passing an smallin
t
> value (@.batch_ID)to the dynamic sql, do I need to conver it to didferent
> datatype, how to do it?
> THanks a lot.
> Here is the error message I got:
> **********
> 0
> ST_ccst_code_value
> Msg 245, Level 16, State 1, Procedure CR_Update_Stage_tables, Line 23
> Conversion failed when converting the nvarchar value 'Update
> ST_ccst_code_value Set DTS_BATCH_ID =' to data type smallint.
> Here is the procedure:
> ****
> ALTER PROCEDURE [dbo].[CR_Update_Stage_tables]
> @.BatchName nvarchar(30)
> AS
> Declare @.batch_ID smallint
> select @.batch_ID = 0
> Declare @.SQL VarChar(1000)
> Declare @.ST_table nvarchar(30)
>
> exec SP_CR_Get_Batch_ID @.BatchName, @.batch_ID output
> print @.batch_ID
> select @.ST_table = 'ST_'+ @.BatchName
> print @.ST_table
> SELECT @.SQL = 'Update ' + @.ST_table + ' Set DTS_BATCH_ID ='+ @.batch_ID
> Exec (@.SQL)
> GO
Thursday, March 8, 2012
Data change during backup.
Dynamic Backups
SQL Server 7.0's backup process is faster than SQL Server 6.5's process.
Let's look at how these two processes differ. When SQL Server begins a
backup, it notes the Log Sequence Number (LSN) of the oldest active
transaction and performs a checkpoint, which synchronizes the pages on disk
with the pages in the cache memory. Then, SQL Server starts the backup,
reading from the hard disk, not from the cache. In SQL Server 6.5, if a user
needs to update a record, SQL Server allows the update if the backup process
has already backed up the record. Otherwise, SQL Server holds up the request
for a moment, long enough for the backup process to jump ahead and back up the
extent containing that record. SQL Server 6.5 then lets the update request
proceed and resumes the backup process at the point it was when it was
interrupted. When SQL Server 6.5 reaches this extent again, the backup
process skips it, because the process has already backed up this extent.
SQL Server 7.0, in contrast, doesn't worry about whether users are reading
or changing pages. SQL Server 7.0 just backs up the extents sequentially,
which is faster than jumping around as SQL Server 6.5 does. Because SQL
Server 7.0 doesn't jump ahead to back up extents before users change data,
you could end up with inconsistent data. However, SQL Server 7.0 also
introduced the ability to capture data changes that users make while the
backup is in progress. When SQL Server 7.0 reaches the end of the data, the
backup process backs up the transaction log, capturing the changes users made
during the backup process. Although dynamic backup comes with a performance
penalty, Microsoft promises no more than about a 6 percent to 7 percent
performance reduction, which most users would never notice. Scheduling
backups during low database activity is still a good idea, but if you have to
back up the transaction log several times a day, you won't be able to avoid
having some users connected.
Because a backup can take considerable time, SQL Server 7.0's process is a
welcome enhancement. Pre-SQL Server 7.0 releases back up the data as it is
when SQL Server begins the backup; SQL Server 7.0 backs up the data as it is
when SQL Server finishes the backup.
If you're backing up just the transaction log, at the end of the backup
process, SQL Server 7.0 truncates the log, removing all transactions before
the LSN it recorded for the oldest ongoing transaction. Truncating the log
frees up space in the log and keeps it from filling up. (The log could still
fill up, however, if you have a long-running transaction that isn't
completing.) Remember that SQL Server doesn't truncate any log entry that has
an LSN greater than that of the oldest active transaction.
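To make that sequence concrete, here is a minimal sketch of a full backup followed by a separate log backup; the database name and file paths are made up for the example:

-- Full database backup: data extents are read sequentially while users keep
-- working; at the end, the log records covering changes made during the
-- backup are appended so the restored database is consistent.
BACKUP DATABASE Sales
TO DISK = 'D:\Backups\Sales_full.bak'

-- Separate transaction log backup: captures later changes and truncates the
-- inactive portion of the log, keeping it from filling up.
BACKUP LOG Sales
TO DISK = 'D:\Backups\Sales_log.trn'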
The question is:
What is the behavior of SQL Server 2000 when a long-running transaction is
active during the backup? Does SQL Server 2000 back up the data as it is when
the backup begins, or as it is when the backup finishes?

Same as 7.0. Data pages are backed up as they are, and the transaction log records generated during
the backup process are also included. If the transaction hasn't finished by the time the backup ends,
the COMMIT log record isn't included in the backup, and when you restore the backup the transaction
will be rolled back.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
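A minimal sketch of the matching restore, again with illustrative names: it is the recovery step that rolls back any transaction whose COMMIT record did not make it into the backup.

-- Restore the full backup but leave the database unrecovered so the log
-- backup can be applied afterwards.
RESTORE DATABASE Sales
FROM DISK = 'D:\Backups\Sales_full.bak'
WITH NORECOVERY

-- Apply the log backup; WITH RECOVERY runs recovery, which rolls back any
-- transaction that was still open when the backup finished.
RESTORE LOG Sales
FROM DISK = 'D:\Backups\Sales_log.trn'
WITH RECOVERY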
"andrea favero" <andreafavero@.discussions.microsoft.com> wrote in message
news:89E94015-4FD0-4EE6-97E0-FAE4E591919B@.microsoft.com...
>I have found this article:
> Dynamic Backups
> SQL Server 7.0's backup process is faster than SQL Server 6.5's process.
> Let's look at how these two processes differ. When SQL Server begins a
> backup, it notes the Log Sequence Number (LSN) of the oldest active
> transaction and performs a checkpoint, which synchronizes the pages on disk
> with the pages in the cache memory. Then, SQL Server starts the backup,
> reading from the hard disk-not from the cache. In SQL Server 6.5, if a user
> needs to update a record, SQL Server allows the update if the backup process
> has already backed up the record. Otherwise, SQL Server holds up the request
> for a moment-long enough for the backup process to jump ahead and back up the
> extent containing that record. SQL Server 6.5 then lets the update request
> proceed and resumes the backup process at the point it was when it was
> interrupted. When SQL Server 6.5 reaches this extent again, the backup
> process skips it, because the process has already backed up this extent.
> SQL Server 7.0, in contrast, doesn't worry about whether users are reading
> or changing pages. SQL Server 7.0 just backs up the extents sequentially,
> which is faster than jumping around as SQL Server 6.5 does. Because SQL
> Server 7.0 doesn't jump ahead to back up extents before users change data,
> you could end up with inconsistent data. However, SQL Server 7.0 also
> introduced the ability to capture data changes that users make while the
> backup is in progress. When SQL Server 7.0 reaches the end of the data, the
> backup process backs up the transaction log, capturing the changes users made
> during the backup process. Although dynamic backup comes with a performance
> penalty, Microsoft promises no more than about a 6 percent to 7 percent
> performance reduction, which most users would never notice. Scheduling
> backups during low database activity is still a good idea, but if you have to
> back up the transaction log several times a day, you won't be able to avoid
> having some users connected.
> Because a backup can take considerable time, SQL Server 7.0's process is a
> welcome enhancement. Pre-SQL Server 7.0 releases back up the data as it is
> when SQL Server begins the backup; SQL Server 7.0 backs up the data as it is
> when SQL Server finishes the backup.
> If you're backing up just the transaction log, at the end of the backup
> process, SQL Server 7.0 truncates the log, removing all transactions before
> the LSN it recorded for the oldest ongoing transaction. Truncating the log
> frees up space in the log and keeps it from filling up. (The log could still
> fill up, however, if you have a long-running transaction that isn't
> completing.) Remember that SQL Server doesn't truncate any log entry that has
> an LSN greater than that of the oldest active transaction.
> The question is:
> What is the behavior of Sql Server 2000, when a long-running transaction
> work during backup?
> SQL Server 2000 backs up the data as it is when SQL Server begin or finishes
> the backup ?
Friday, February 17, 2012
DAC doesn't work with an explicit port number
Hello everybody,
After installing SQL Server 2005, I noticed that the DAC (Dedicated Admin Connection) is started on a dynamic port.
2006-08-08 12:19:47.870 Server Server is listening on [ 'any' <ipv4> 14330].
2006-08-08 12:19:47.880 Server Server is listening on [ 'any' <ipv4> 1433].
2006-08-08 12:19:47.880 Server Server local connection provider is ready to accept connection on [ \\.\pipe\SQLLocal\A ].
2006-08-08 12:19:47.880 Server Server local connection provider is ready to accept connection on [ \\.\pipe\MSSQL$A\sql\query ].
2006-08-08 12:19:47.900 Server Server is listening on [ 'any' <ipv4> 1212].
2006-08-08 12:19:47.900 Server Dedicated admin connection support was established for listening remotely on port 1212.
My questions:
Is it possible to fix the port for the DAC?
How can I use the DAC behind a firewall if I don't know which port it will use?
OK, Books Online says: if SQL Server is configured to accept remote administration connections, the DAC must be initiated with an explicit port number:
sqlcmd -Stcp:<server>,<port> (By the way, isn't the -A parameter needed?)
but it doesn't work for me!
For the default connection I have:
C:\>sqlcmd -A -SSERVER01\A -Usa -P123456 -dmaster
sys.dm_exec_connections says:
connect_time             net_transport  client_tcp_port  local_tcp_port
-----------------------  -------------  ---------------  --------------
2006-08-08 12:24:05.750  TCP            3143             1433
2006-08-08 12:49:56.513  TCP            3273             1212
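(The output above is consistent with a query along these lines; the exact statement isn't shown in the post:)

SELECT connect_time, net_transport, client_tcp_port, local_tcp_port
FROM sys.dm_exec_connections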
Specifying the port I have:
C:\>sqlcmd -A -Stcp:SERVER01,1434 -Usa -P123456 -dmaster
but I get "Sqlcmd: Error: Microsoft SQL Native Client : Login timeout expired."
PS: I have tried with every port but always get a timeout!
Can someone help, please?
Thanks
Nilton Pinheiro
Hello everyone,
Please...
Is it possible to fix the port for the DAC?
How can I use the DAC behind a firewall if I don't know which port it will use?
Thanks.
Nilton Pinheiro
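For what it's worth, here is a hedged sketch of one way to work with the port the DAC actually chose, based only on what the log excerpt above already shows. The "Dedicated admin connection support was established for listening remotely on port ..." message can be read from the current error log with xp_readerrorlog (an undocumented procedure, so treat the call as an assumption for your build):

-- Run from a normal (non-DAC) connection: read the current error log and
-- look for the 'Dedicated admin connection support was established' entry
-- to see which port the DAC is listening on.
EXEC master..xp_readerrorlog

With that port known (1212 in the log excerpt above), the connection can be made either by letting the client resolve the DAC endpoint with -A, or by the explicit port as the quoted Books Online text describes, in which case -A is not used:

sqlcmd -A -S SERVER01\A -U sa -P 123456 -d master
sqlcmd -S tcp:SERVER01,1212 -U sa -P 123456 -d master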