
Sunday, March 25, 2012

Data encryption and keys

Hi,
I would like to encrypt data in my database. I want the encrypted column values to be viewable only by a certain group of users: just because a user has access to my database doesn't mean they should be able to access my encrypted data.

Currently, I am using the following approach for my key management.

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'MasterKeyPass'

CREATE ASYMMETRIC KEY MyAsymmKey AUTHORIZATION MyUser
WITH ALGORITHM = RSA_1024
ENCRYPTION BY PASSWORD ='MyAsymmPass'

CREATE SYMMETRIC KEY MySymmKey WITH ALGORITHM = DES
ENCRYPTION BY ASYMMETRIC KEY MyAsymmKey

My data will be encrypted using Symmetric key MySymmKey.
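To round the picture out, this is roughly how the encrypt/decrypt round trip would look with that key chain. It's only a sketch: the table Customers and the columns CardNo/CardNoEnc are hypothetical, and the passwords are the ones from the example above.

```sql
-- Open the symmetric key; this requires the asymmetric key's password
OPEN SYMMETRIC KEY MySymmKey
    DECRYPTION BY ASYMMETRIC KEY MyAsymmKey WITH PASSWORD = 'MyAsymmPass';

-- Encrypt on the way in (EncryptByKey returns varbinary)
UPDATE Customers
SET CardNoEnc = EncryptByKey(Key_GUID('MySymmKey'), CardNo);

-- Decrypt on the way out; a session that never opened the key gets NULL
SELECT CONVERT(varchar(50), DecryptByKey(CardNoEnc)) AS CardNo
FROM Customers;

CLOSE SYMMETRIC KEY MySymmKey;
```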

Users who want to access my data must have the master key and MyAsymmKey passwords.
Is it OK? Any better way?

Thank you

As long as the user you are trying to protect against is not dbo or a sysadmin, you can also restrict access through permissions (i.e. "GRANT CONTROL ON ASYMMETRIC KEY :: MyAsymmKey TO user1") rather than through passwords. The advantage is that the user then doesn't have to memorize a password, and you don't have to pass any password values around, which is safer from a security standpoint.

Sung
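To sketch what that might look like (the key and user names below are made up; with no ENCRYPTION BY PASSWORD clause, the asymmetric key is protected by the database master key instead):

```sql
-- Keys protected by the database master key, not by passwords
CREATE ASYMMETRIC KEY MyAsymmKey2 WITH ALGORITHM = RSA_1024;

CREATE SYMMETRIC KEY MySymmKey2 WITH ALGORITHM = AES_256
    ENCRYPTION BY ASYMMETRIC KEY MyAsymmKey2;

-- Access is now governed by permissions rather than passwords
GRANT CONTROL ON ASYMMETRIC KEY :: MyAsymmKey2 TO user1;
GRANT VIEW DEFINITION ON SYMMETRIC KEY :: MySymmKey2 TO user1;

-- user1 can then open the key without supplying any password:
-- OPEN SYMMETRIC KEY MySymmKey2 DECRYPTION BY ASYMMETRIC KEY MyAsymmKey2;
```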

|||FYI, Books Online has a section about backing up and restoring encryption keys: http://msdn2.microsoft.com/en-US/library/ms157275.aspx
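The statements that page describes boil down to something like the following (the file path and passwords here are placeholders):

```sql
-- Back up the database master key; the password protects the backup file
BACKUP MASTER KEY TO FILE = 'C:\keys\master_key.bak'
    ENCRYPTION BY PASSWORD = 'BackupFilePass';

-- Restore it later, e.g. after moving the database to another server
RESTORE MASTER KEY FROM FILE = 'C:\keys\master_key.bak'
    DECRYPTION BY PASSWORD = 'BackupFilePass'
    ENCRYPTION BY PASSWORD = 'MasterKeyPass';
```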

Thursday, March 22, 2012

Data driven query - update only CERTAIN fields?

Hi,
I'm trying to set up a Data Driven Query task, to update only certain
fields in a table.
However, even though the update query only contains the fields I'm
wanting to update, when I try and run it I get:
"One or more destination parameter columns had no transform specified"
Thing is, I don't WANT to specify a transform for most of the
destination columns - I want them left alone!!
I would be so, so grateful if anyone who's done something like this
could help, as there are precious few decent examples of this sort of
thing on the net. Any links anyone has to good in-depth tutorials
covering more than just the basic 'update every single field' scenario
would be fantastic too.
Many, many thanks folks.
Champers|||Just to quickly illustrate this (I'm not sure I explained this too well
yesterday)
The destination table (which is my binding table) has, let's say, 10
columns
During the update query, I only want maybe 2 fields to be updated
So my code would be something like...
Function Main()
    If IsEmpty(DTSLookups("DoesRecordExist").Execute(DTSSource("PersonID").Value)) Then
        DTSDestination("Field3") = DTSSource("SURNAME")
        DTSDestination("Field7") = DTSSource("FORENAME")
        Main = DTSTransformStat_InsertQuery
    Else
        ' ...
    End If
End Function
But I get the feeling that to avoid the error message above, I still
have to specify all the destination columns, even if I don't want to
update them...but, of course, I would have to set them to update to
something...which I don't want to do!
This is driving me crazy. Thanks in advance for any advice!|||I think I may have just cracked this, so I'll post the answer for
anyone else struggling with it. Basically, on the Transformations tab
(the one with the graphical list of all source and destination
columns), you have to select ALL destination columns (and presumably
all source ones too) regardless of whether you're using them in a query
or not. This is REALLY confusing, and I have not been able to find a
tutorial that explains this anywhere. I'm going to press on now with my
DTS task, and I'll post any other useful info I find on this thread, as
I'm sure other people must have been tearing their hair out over this.|||I'm nearly there with this now, but I have to admit it's such a
confusing thing to use.
I just have one more question that someone could maybe answer - I have
2 DDQ's, one to transfer data from some columns of the source table
into table 1, and another to transfer other columns into a separate
table, table 2.
Now, the first DDQ is OK. However, in the second, one of my queries
refers to a source column that doesn't directly transfer to a column in
table 2. Table 2 here is my binding table.
In order to 'reference' the source column, I'm having to basically map
that source column to a column in the binding 'version' of table 2 (one
that I'm not 'using' in this DDQ), so that I can use it in the
Parameters list.
Is this the correct way to do this? i.e. although the binding table
used is originally a real destination table, it's only actually used as
a way of mapping and referencing source columns, and in actual fact
bears no relevance to any data transformations (i.e. this 'mapping'
doesn't actually alter data in the destination column - only my QUERY
can does this, in which the destination table itself is used as a real
query destination table, rather than as the binding table).
Sorry to be so verbose...I'd be grateful if someone could set my mind
at ease and clarify that I've got this right in my head!
Thanks so much guys.

Monday, March 19, 2012

data design question - record versioning

Wanted to get feedback from everyone. The current project requires
keeping versions of records on certain key tables, and possibly
using those older versions to selectively roll back. The
business needs the ability to selectively rollback everything that
happened in the data load last Wednesday, for example, but keep all
data loads since then. So I'm exploring history/auditing tables,
filled by triggers, but it seems like a huge pain. I also saw an
article by Ben Allfree ( http://www.codeproject.com/useritems/LisRecordVersioning.asp
) on codeproject that was interesting, but was conceptually quite
different, with some detractors in the feedback. Surely this is a
common issue. What are everyone's thoughts?|||Common issue, no common answer. I'd say it is dependent on your application
(just like we DBA like to put it). The ratio of active and inactive records
plays an important role in here.
You may also consider setting up an (indexed) view to help address the issue.
Quentin
"CoreyB" <unc27932@.yahoo.com> wrote in message
news:1180029403.355004.23520@.q69g2000hsb.googlegroups.com...
> Wanted to get feedback from everyone. Current project requires the
> need on certain key tables to keep versions of records, and to
> possibly use those older versions to selectively rollback. The
> business needs the ability to selectively rollback everything that
> happened in the data load last Wednesday, for example, but keep all
> data loads since then. So I'm exploring history/auditing tables,
> filled by triggers, but it seems like a huge pain. I also saw an
> article by Ben Allfree (
> http://www.codeproject.com/useritem...dVersioning.asp
> ) on codeproject that was interesting, but was conceptually quite
> different, with some detractors in the feedback. Surely this is a
> common issue. What are everyone's thoughts?
>
|||Hmmmm - that's what I was afraid the answer would be. I was hoping
there was a magical, documented, easy to implement solution.
There will be one and only one active record at any given time. There
may be 1-n history records. But realistically I doubt there would be
less than 15 history records associated with any given record. And
probably < 10% of all records would have any history at all. Majority
would be fresh inserts.
On May 25, 12:10 pm, "Quentin Ran" <remove_this_qr...@.yahoo.com>
wrote:
> Common issue, no common answer. I'd say it is dependent on your application
> (just like we DBA like to put it). The ratio of active and inactive records
> plays an important role in here.
> You may also consider set up (indexed) view to help address the issue.
> Quentin
> "CoreyB" <unc27...@.yahoo.com> wrote in message
> news:1180029403.355004.23520@.q69g2000hsb.googlegroups.com...
>
>
|||I'd go with the active/passive flag. It sounds like, at most, the inactive records
would be as many as the active records, if not drastically less. Putting
an adequate index (on the PK and the column that states "same group") will almost
eliminate any access contention problems due to this active/passive issue.
<unc27932@.yahoo.com> wrote in message
news:1180110598.547933.90790@.o5g2000hsb.googlegroups.com...
> Hmmmm - that's what I was afraid the answer would be. I was hoping
> there was a magical, documented, easy to implement solution.
> There will be one and only one active record at any given time. There
> may be 1-n history records. But realistically I doubt there would be
> less than 15 history records associated with any given record. And
> probably < 10% of all records would have any history at all. Majority
> would be fresh inserts.
> On May 25, 12:10 pm, "Quentin Ran" <remove_this_qr...@.yahoo.com>
> wrote:
>
|||Quentin Ran wrote:
> I'd go with the active/passive flag. Sounds at most the inactive records
> would be as many as the active records, if not drastically less. Putting
> adequate index (on the columns PK and that states "same group") will almost
> eliminate any access contention problems due to this active/passive issue.
I would also consider having a separate history table instead of
active/inactive flag.
Since the rollover/rollback operations are expected to happen
significantly less often than active record access/update, keeping the
history in a separate table may make sense.
Y.S.

> <unc27932@.yahoo.com> wrote in message
> news:1180110598.547933.90790@.o5g2000hsb.googlegroups.com...
>
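To make the history-table-filled-by-triggers idea from this thread concrete, a minimal sketch might look like the following. All table and column names are made up; a real version would also record who made the change and which data load it belonged to, so a given load can be rolled back selectively.

```sql
-- Hypothetical base table plus a history copy with a version timestamp
CREATE TABLE Widget (
    WidgetID int PRIMARY KEY,
    Name     varchar(50),
    Price    money
);

CREATE TABLE WidgetHistory (
    WidgetID   int,
    Name       varchar(50),
    Price      money,
    VersionDte datetime DEFAULT GETDATE()  -- when this version was superseded
);
GO

-- Copy the old row image into history on every update or delete
CREATE TRIGGER trg_Widget_History ON Widget
AFTER UPDATE, DELETE
AS
    INSERT INTO WidgetHistory (WidgetID, Name, Price)
    SELECT WidgetID, Name, Price
    FROM deleted;
GO
```

Rolling back last Wednesday's load then becomes an UPDATE of Widget from the WidgetHistory rows stamped that day, leaving later loads untouched.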


Saturday, February 25, 2012

Data aggregation, synchronization, and search

Hello All,
I know that this problem is not SQL Server specific, but I decided to
post it here because it is a problem that I am certain many of you
have experienced (or at least thought about).
In my system, I have a central SQL Server Database and number of
remote "data providers". Some of these data providers are other SQL
Servers and others are other databases or other data provider services
that expose a database-like interface.
We provide two levels of search capability in the system. The user can
search either the central database, or search one specific data
provider. One of the new requirements, was to provide a global search,
which would search the central database and all the data providers at
once.
Performing a distributed search is prohibitive in our scenario because we
have many data providers and they are remotely located. To minimize the
search time, my first reaction was to create a copy of the data from
the remote providers on the central database and keep it in synch with
the data in the remote data providers. By doing this, I would expect
to simplify the search by having simply a search in the central
database.
However, a number of new questions concerning data synchronization
arise:
1. How can I keep the data in the central db in synch with the remote
data providers? (If they were all SQL Server databases, I would use
merge replication...but they aren't.)
2. How do I handle disconnects? If a remote provider disconnects and
reconnects, some of the data might be stale. In principle, I could
take a new snapshot and keep the data in synch from that point.
However, this is very expensive!! Do you know of any other techniques
for data synchronization that would minimize network traffic?
3. Is this a good approach? What do you think?
Your opinion is greatly appreciated.
Kind regards
CD
Hi
"crbd98@.yahoo.com" wrote:

> Hello All,
> I know that this problem is not SQL Server specific but I decided to
> post it here because it is a problem that I am certain many of you
> expericenced (or at least thought about).
> In my system, I have a central SQL Server Database and number of
> remote "data providers". Some of these data providers are other SQL
> Servers and others are other databases or other data provider services
> that expose a database-like interface.
> We provide two levels of search capability in the system. The user can
> search either the central database, or search one specific data
> provider. One of the new requirements, was to provide a global search,
> which would search the central database and all the data providers at
> once.
> Performing a distributed is prohibitive in our scenario because we
> have many data providers and they remotelly located. To minimize the
> search time, my first reaction was to create a copy of the data from
> the remote providers on the central database and keep it in synch with
> the data in the remote data providers. By doing this, I would expect
> to simplify the search by having simply a search in the central
> database.
> However, a number of new question concerning data synchronization
> arise:
> 1. How can I keep the data in the central db in synch with the remote
> data providers? (If they all were SQL Server databases, I would use
> merger replication...but they aren't?)
> 2. How to handle disconnects? If the remote provider disconnets and
> reconnects, some of the data might be stale. In pronciple, I could
> take a new snapshot and keep the data in synch from that point.
> However, this is very expensive!! Do you know of any other techniques
> for data synchronization that would minimize network traffic?
> 3. Is this a good approach? What do you think?
> Your opinion is gratly appreciated.
> Kind regards
> CD
>
You don't say how you currently search the remote data providers! It may be
more acceptable to the users if the global search initially searched the
central database and returned those results, and then searched the remote
databases; that would return something to the user quicker and reduce the
need to speed up the remote searches.
If you held the data centrally, you would also need to know how much latency
would be acceptable for the data; if you could get away with uploading once a
day out of hours, this could be an easier solution to implement.
John
|||Answers inline.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
<crbd98@.yahoo.com> wrote in message
news:1176094164.164628.321030@.q75g2000hsh.googlegroups.com...
> Hello All,
> I know that this problem is not SQL Server specific but I decided to
> post it here because it is a problem that I am certain many of you
> expericenced (or at least thought about).
> In my system, I have a central SQL Server Database and number of
> remote "data providers". Some of these data providers are other SQL
> Servers and others are other databases or other data provider services
> that expose a database-like interface.
> We provide two levels of search capability in the system. The user can
> search either the central database, or search one specific data
> provider. One of the new requirements, was to provide a global search,
> which would search the central database and all the data providers at
> once.
> Performing a distributed is prohibitive in our scenario because we
> have many data providers and they remotelly located. To minimize the
> search time, my first reaction was to create a copy of the data from
> the remote providers on the central database and keep it in synch with
> the data in the remote data providers. By doing this, I would expect
> to simplify the search by having simply a search in the central
> database.
> However, a number of new question concerning data synchronization
> arise:
> 1. How can I keep the data in the central db in synch with the remote
> data providers? (If they all were SQL Server databases, I would use
> merger replication...but they aren't?)
I would use transactional replication to replicate the remote SQL Servers to
the central location. For the non-SQL Server data providers, depending on the
amount of changes, the volume of data, and whether you can track changes, you
should be able to write something that will bring the data locally.
> 2. How to handle disconnects? If the remote provider disconnets and
> reconnects, some of the data might be stale. In pronciple, I could
> take a new snapshot and keep the data in synch from that point.
> However, this is very expensive!! Do you know of any other techniques
> for data synchronization that would minimize network traffic?
You need to implement some method of change tracking so only the changes
will move each time rather than the entire data set.
> 3. Is this a good approach? What do you think?
>
The network hop is always problematic and moving data locally is one
approach to avoid it. Another option might be to move all providers locally
and then have the clients use a remote access solution to access the central
server/repository. This will work well if your links are stable and well
connected.

> Your opinion is gratly appreciated.
> Kind regards
> CD
>
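As an illustration of the change-tracking idea for the SQL Server providers, one common approach is a rowversion high-water mark. Everything below is a sketch with made-up table names (Products on the provider, SyncState on the central server):

```sql
-- On the remote provider, each tracked table carries a rowversion column:
--   ALTER TABLE Products ADD rv rowversion;

-- The central server remembers the high-water mark from the last pull
DECLARE @last_rv binary(8);
SELECT @last_rv = LastRowversion FROM SyncState WHERE ProviderID = 42;

-- Pull only rows changed since then; this is what keeps network traffic
-- small, and it survives disconnects: after a reconnect you resume from
-- the stored mark instead of taking a whole new snapshot
SELECT ProductID, Name, Price, rv
FROM Products
WHERE rv > @last_rv;

-- After applying the batch centrally, advance the mark
UPDATE SyncState
SET LastRowversion = (SELECT MAX(rv) FROM Products)
WHERE ProviderID = 42;
```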

Friday, February 17, 2012

Cutting to a certain word

Hi there, I need to know how to cut a string to the nearest word. For example, I've got an article and I need to extract just a part of the beginning. I could use LEFT([content], 250), but there is little chance this will cut on a word boundary. Therefore I need to know if there is a function in T-SQL that will cut to the nearest word, or I will simply put a summary field in the database. (I prefer to generate the summary on the fly if possible.)

You could use CHARINDEX.

SELECT LEFT(column, CHARINDEX(' ', column) ) FROM yourTable

|||

I just looked up what you said, and this doesn't quite do what I need.

I didn't even try it, but I know what this will result in... it returns the first word of the column.

But I guess if I provide a startAt value (the third parameter) to CHARINDEX, I can start the search at character 250 and it will cut accordingly...

|||

Yes, this will always give you a full word:

DECLARE @t varchar(100), @startat int
SET @t = 'this is a good test and a long sentence to test end of word'
SET @startat = 33
SELECT @t, LEFT(@t, @startat + CHARINDEX(' ', RIGHT(@t, LEN(@t) - @startat)))
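One caveat worth adding: if there is no space after the cut position, CHARINDEX returns 0 and the expression above truncates mid-word. A guarded version using the three-parameter CHARINDEX mentioned earlier (same sample variables; @startat would be 250 for the article case):

```sql
DECLARE @t varchar(100), @startat int
SET @t = 'this is a good test and a long sentence to test end of word'
SET @startat = 33
SELECT CASE
         WHEN CHARINDEX(' ', @t, @startat + 1) = 0 THEN @t  -- no later space: keep the whole string
         ELSE LEFT(@t, CHARINDEX(' ', @t, @startat + 1) - 1) -- cut just before the next space
       END
```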