Showing posts with label record. Show all posts

Sunday, March 25, 2012

Data export to individual files

Hello,
I'm looking to use T-SQL to extract records from a table and have each record
stored in an individual .eml file. A query-based DTS task doesn't let me
specify what extension my files will have, and I'm unsure how to have
each record written to a separate file. Any ideas on how I can pull
this off?
Much appreciated,
A few possibilities:
1. Use OSQL to run the query and pipe the output to the desired file
2. Use xp_cmdshell to issue DOS "ECHO" commands to write the desired file
3. Use a combination of xp_cmdshell and OPENROWSET to create and then
write the desired file
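As a minimal sketch along the lines of option 2, using xp_cmdshell with BCP rather than ECHO (all table, column, database, server, and path names here are hypothetical, and xp_cmdshell must be enabled):

```sql
-- Hypothetical source table: dbo.Messages(MsgId int, Body varchar(8000))
DECLARE @id int, @cmd varchar(500)

DECLARE msg_cur CURSOR FOR SELECT MsgId FROM dbo.Messages
OPEN msg_cur
FETCH NEXT FROM msg_cur INTO @id
WHILE @@FETCH_STATUS = 0
BEGIN
    -- One BCP call per row, writing that row's body to its own .eml file
    SET @cmd = 'bcp "SELECT Body FROM MyDb.dbo.Messages WHERE MsgId = '
             + CAST(@id AS varchar(10))
             + '" queryout "C:\export\msg' + CAST(@id AS varchar(10))
             + '.eml" -c -T -S MyServer'
    EXEC master..xp_cmdshell @cmd
    FETCH NEXT FROM msg_cur INTO @id
END
CLOSE msg_cur
DEALLOCATE msg_cur
```

Because the file name is built by your own code, the .eml extension is entirely under your control, which works around the DTS limitation.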
Tracy McKibben
MCDBA
http://www.realsqlguy.com

Monday, March 19, 2012

data design question - record versioning

Wanted to get feedback from everyone. Current project requires the
need on certain key tables to keep versions of records, and to
possibly use those older versions to selectively rollback. The
business needs the ability to selectively rollback everything that
happened in the data load last Wednesday, for example, but keep all
data loads since then. So I'm exploring history/auditing tables,
filled by triggers, but it seems like a huge pain. I also saw an
article by Ben Allfree ( http://www.codeproject.com/useritems/LisRecordVersioning.asp
) on codeproject that was interesting, but was conceptually quite
different, with some detractors in the feedback. Surely this is a
common issue. What are everyone's thoughts?

Common issue, no common answer. I'd say it depends on your application
(just as we DBAs like to put it). The ratio of active to inactive records
plays an important role here.
You may also consider setting up an (indexed) view to help address the issue.
Quentin
Hmmmm - that's what I was afraid the answer would be. I was hoping
there was a magical, documented, easy-to-implement solution.
There will be one and only one active record at any given time. There
may be 1-n history records, but realistically I doubt there would be
fewer than 15 history records associated with any given record, and
probably < 10% of all records would have any history at all. The majority
would be fresh inserts.
I'd go with the active/passive flag. It sounds like the inactive records
would at most be as numerous as the active records, if not drastically fewer.
An adequate index (on the PK columns and the column that marks the "same
group") will almost eliminate any access contention problems due to this
active/passive flag.
I would also consider having a separate history table instead of an
active/inactive flag.
Since the rollover/rollback operations are expected to happen
significantly less often than active record access/update, keeping the
history in a separate table may make sense.
Y.S.

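As a rough illustration of the trigger-filled history approach discussed above (table and column names are hypothetical; the LoadBatchId column is the piece that makes selective rollback of one day's load possible):

```sql
-- Hypothetical base table: dbo.Customer(CustomerId, Name, LoadBatchId)
CREATE TABLE dbo.CustomerHistory (
    CustomerId  int          NOT NULL,
    Name        varchar(100) NOT NULL,
    LoadBatchId int          NOT NULL,   -- which data load wrote this version
    ArchivedAt  datetime     NOT NULL DEFAULT GETDATE()
)
GO

-- Copy the prior version of each row into history on every change
CREATE TRIGGER trg_Customer_History ON dbo.Customer
FOR UPDATE, DELETE
AS
    INSERT INTO dbo.CustomerHistory (CustomerId, Name, LoadBatchId)
    SELECT CustomerId, Name, LoadBatchId
    FROM deleted
GO
```

Rolling back last Wednesday's load then becomes a matter of restoring, for each key, the newest history row whose LoadBatchId predates that load, while leaving rows written by later loads untouched.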


Sunday, March 11, 2012

data conversion - numeric to string losing precision

Hi All,

I'm using a "Data Conversion" transform to convert a numeric field to a string just before I save the record set to the database.

The problem is that when this numeric field is between 0 and 1, it loses the leading zero on its decimal value.

For example, if the numeric value is 0.32, after converting it to a string the new value will be: .32

It's lost the 0 in front of it. I can't do this conversion at the query level because it's a derived field, so I need to convert it to a string before storing it.

When converting to a string I'm using code page 1252 (ANSI - Latin I). I also tried a Unicode string; both lose the 0 in front.

Can anyone help with this?

Is your ultimate database target for the numeric data type a character based column, hence the need to retain the leading zeros?

If that is the case, retaining leading zeros can be accomplished with a derived column transform, rather than a data conversion transform. There you have access to the SSIS expression language, with a relatively standard, if meager, set of string functions, as well as type casts.

Another alternative for pretty much any data type conversion is a Script Transform, which will afford you the full power of the .NET Framework, including, in this case, custom numeric formats for types destined for strings.

Hi jaegd,

I tried that, but no luck. I use a derived column, and this is my expression:

ISNULL(investmentPercentage) ? 0.00 : investmentPercentage

Here investmentPercentage is a numeric(15,2). I add the above expression as a new string column (8 chars), but it still loses the 0:

.00 when it's null

.32 when it's 0.32

Any comments?

AJ


Use the Script Task, and follow the instructions in this link for numeric format specifiers for strings. You need to do something similar to this...

Dim number As Double = 0.32

number.ToString("0.00")   ' the "0.00" custom format keeps the leading zero

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguide/html/cpconstandardnumericformatstrings.asp


In the derived column, you can still achieve it with conditional logic. The script mentioned above is cleaner, but if you want to avoid adding that, you can try an expression like:

(investmentPercentage < 1 ? "0" : "") + (DT_WSTR, 20)investmentPercentage

This expression will add a "0" to the beginning of the string if the number is less than one (which is the only time the zero is dropped). When the number is greater than or equal to 1, an empty string is added.

NOTE: this does not take into account negative values. If investmentPercentage can be negative, you will need to tweak the expression a little (or post here, and I can help with that).

Thanks
Mark


Hi Mark,

Thanks for that, but it only half answers my question.

Yes, investmentPercentage can be negative or even null. I was wondering if there's an easier way; I've got a lot of columns like this, and writing an expression this long for each of them can be tedious. Is this how you would "tweak" it?

ISNULL(investmentPercentage) ? "0.00" : (investmentPercentage < 1 && investmentPercentage >-1 ? "0" : "") + (DT_WSTR, 8)investmentPercentage


You might find it easier to use string formatting in the script component... I can't think of any other way that would be better.

As for tweaking the expression, you will have to do some extra things to take care of the negative sign (so the prepended zero doesn't end up before the negative sign). The following expression adds a case for values between -1 and 0 to handle that:

ISNULL(investmentPercentage) ? "0.00" : investmentPercentage < 0 && investmentPercentage > -1 ? "-0" + SUBSTRING((DT_WSTR,8)investmentPercentage, 2, 8) : (investmentPercentage >= 0 && investmentPercentage < 1 ? "0" : "") + (DT_WSTR,8)investmentPercentage

Let me know if this does the trick for you.

Thanks
Mark

Thursday, March 8, 2012

data cleaning

Hey gang, I have two 100,000-record tables, each with an ID and address field (I've excluded all the other fields for now). I need to make sure the ID field in table A is equal to the one in table B, and that the address in table A is equal to the one in table B. If the IDs or addresses do not match, create a separate table. Any good ideas?

create table table3
( ID integer
, address varchar(255)
)

insert into table3
select ID
, address
from table1
where not exists
( select * from table2
where ID = table1.ID )
union all
select ID
, address
from table2
where not exists
( select * from table1
where ID = table2.ID )
union all
select table1.ID
, table1.address
from table1
inner
join table2
on table1.ID = table2.ID
where table1.address <> table2.address
union all
select table2.ID
, table2.address
from table1
inner
join table2
on table1.ID = table2.ID
where table1.address <> table2.address

USE Northwind
GO

SET NOCOUNT ON
CREATE TABLE Table1([ID] int, address varchar(255))
CREATE TABLE Table2([ID] int, address varchar(255))
GO

INSERT INTO Table1([ID],address)
SELECT 1, 'Here' UNION ALL
SELECT 2, 'There' UNION ALL
SELECT 3, 'And Everywhere'

INSERT INTO Table2([ID],address)
SELECT 0, 'Here' UNION ALL
SELECT 2, 'There' UNION ALL
SELECT 3, 'And Everywheres'
GO

SELECT [ID], address
INTO Table3
FROM (
SELECT [ID]
, address
FROM table1
UNION ALL
SELECT [ID]
, address
FROM table2) AS XXX
GROUP BY [ID], address
HAVING COUNT(*) = 1
GO

SELECT * FROM Table3
GO

SET NOCOUNT OFF
DROP TABLE Table1
DROP TABLE Table2
DROP TABLE Table3
GO
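If your version supports it, the same result can also be had in a single pass with a FULL OUTER JOIN (using the same Table1/Table2 shapes as the examples above). Note that this variant keeps one row per mismatched ID, with both addresses side by side, rather than one row per source table:

```sql
SELECT COALESCE(t1.[ID], t2.[ID]) AS [ID],
       t1.address AS address1,   -- NULL when the ID is missing from Table1
       t2.address AS address2    -- NULL when the ID is missing from Table2
INTO   Table3
FROM   Table1 AS t1
  FULL OUTER JOIN Table2 AS t2
    ON t1.[ID] = t2.[ID]
WHERE  t1.[ID] IS NULL
   OR  t2.[ID] IS NULL
   OR  t1.address <> t2.address
```

Keeping both addresses on one row makes it easier to eyeball which side is wrong during cleanup.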

Wednesday, March 7, 2012

Data base structure question

Hi,
I have a database which holds a record for each contact in Exchange;
the records contain a Status column.
I have been asked to allow the user to search on both the status
and some other properties of the contact (currently FirstName and LastName).
In order to do that I am going to cache the FirstName and
LastName (the searchable properties) of each contact, using WebDAV,
in my database.
Since in the future the searchable properties can change, I am thinking
of saving them in another table; here is my structure:
Table Items
Id Status
1 Imported
Table KeyWords
ItemId Name Value
1 FirstName Julia
1 LastName Adriano
I would like to ask if this structure won't hurt performance, especially since
I am doing paging
using the http://rosca.net/writing/articles/serverside_paging.asp sample.
It is basically an ASP.NET intranet application with a single user.
BTW: I cannot use a distributed query, or store anything in Exchange.
Thanks in advance
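For what it's worth, a typical search against the Items/KeyWords structure sketched above might look like the following (the values come from the example rows; the covering index is a suggestion of mine, not something from the thread):

```sql
-- Find imported contacts whose cached FirstName is 'Julia'
SELECT i.Id, i.Status
FROM   Items    AS i
  JOIN KeyWords AS k
    ON k.ItemId = i.Id
WHERE  i.Status = 'Imported'
  AND  k.Name   = 'FirstName'
  AND  k.Value  = 'Julia'

-- A covering index on the lookup side keeps searches like this cheap:
CREATE INDEX IX_KeyWords_Name_Value ON KeyWords (Name, Value, ItemId)
```

Searching on several properties at once means one join per property (or an EXISTS per property), which is the usual cost of this key/value layout; with a single user and an indexed KeyWords table, paging on top of it should still perform acceptably.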
Hi Julia,
This is not really an Access question, but more of an Exchange one, but
since I've done some work in both, I can perhaps get you pointed in the
right direction. Using the Outlook client for Exchange, there are 4 custom
fields which you can use to hold any data you wish. You can also use any
other field which is unlikely to be used. Just remember to document it well.
Exchange identifies every entry of any kind using a 128 character GUID
string which in Exchange is called the EntryID. You can query the MAPI
namespace to get that value. Now, having that value in your database will
automatically define the record as being imported (at least from Exchange).
The structure you have is OK, but it requires user input when it would be
just as easy to get a value from Exchange.
Ask how to implement this in one of the Exchange newsgroups.
Arvin Meyer, MCP, MVP
Microsoft Access
Free Access downloads:
http://www.datastrat.com
http://www.mvps.org/access
Thanks, I know how to import it from Exchange; actually I am using WebDAV, not MAPI.
I'm also familiar with the EntryId (I am using the EntryId to identify the
contact) and custom fields, but as I wrote, I CANNOT
change anything in the Exchange schema; actually, sometimes it is not Exchange
but another database.
I want to focus on the structure and performance, that's all.
Thanks,
I see nothing wrong with your structure. If you're using ASP and the MSDE
engine with a client-side cursor, you may as well be using the JET engine. It
is designed as a client-side cursor database engine, and in that respect
alone, its performance is often better than the SQL Server engine's. The reason is
that Rushmore technology and JET are designed to send only the specific
indexes asked for, then return the data based on the criteria used with the
index(es). Client-side cursors with the SQL engine return the entire
dataset, irrespective of indexes. If you can perform your work with a
server-side cursor, you will usually get better performance with the
SQL Server engine.
Arvin Meyer, MCP, MVP
Microsoft Access
Free Access downloads:
http://www.datastrat.com
http://www.mvps.org/access
OK, thanks.
Friday, February 24, 2012

Data "falling" out of filter failing to remove related records at subscriber

Summarised: a filtered record, changed at the Publisher so that it falls out of the filter for a Subscriber: at the Subscriber the record is removed, but its related records don't go, bad... Alternatively, a filtered record changed at the Subscriber so that it falls out of the filter: replication removes all related records from the Subscriber database when merge replication runs, good... :-|
Why the difference?
We have row filters and join filters.
Details: I've set up merge replication, with several articles with row/join filters.
When I change a record at the Publisher that exists at the Subscriber (and has been filtered/synced etc.), that record gets communicated to the Subscriber, which is fine.
However, when I change a record on the server so that the record is no longer valid for that Subscriber, because it falls outside the filter, the delete goes to the Subscriber, but its related records on the Subscriber do not get removed.
This is even more confusing, because if I change the record at the Subscriber so that it falls outside the filter, it gets removed along with its related records.
I've checked the logic and the relations/joins, which all appear to be valid. The fact that the Subscriber removes related records when it changes the record suggests that the joins are valid. Doesn't it?
Is there an option/flag to force server changes to validate related records when sync happens?
Any help much appreciated.
Mr Le.

Anybody, anybody? ;-)
I've checked other posts in here, and there appeared to be a couple of other people who had a similar issue; I'm following those threads to see if they offer a solution or hint.
Thanks for reading.

Mr Le,
please can you post up a little more info. Are you adding rows on the
subscriber? Are these rows being propagated to the publisher but then not
removed on the subscriber as you'd expect due to the filter? If this is the
case I know of 2 possibilities:
(1) you've done a bulk insert without firing triggers on the subscriber.
(2) you have a filter like 1=2 and have added records while the merge agent
is running.
In each case you can:
run SP_ADDTABLETOCONTENTS then synchronize
run SP_MERGEDUMMYUPDATE for a single row then synchronize
HTH,
Paul Ibison
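For reference, those two calls look roughly like this (the table name and rowguid value are placeholders; run them at the node holding the rows, then synchronize):

```sql
-- Re-enlist all rows of a table for the next merge pass
EXEC sp_addtabletocontents @table_name = 'Orders'

-- Or touch a single row so the merge agent re-evaluates it
EXEC sp_mergedummyupdate
     @source_table = 'Orders',
     @rowguid      = '00000000-0000-0000-0000-000000000001'
```

sp_mergedummyupdate marks the row as changed without altering its data, which forces the filter logic to be re-applied on the next merge.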
Paul,
Thanks for replying, it's much appreciated; hopefully I can add a bit more info to clarify what's happening.
The Subscriber is a Pull-Subscription.
There are Row/Join filters defined for the Publication.
The initial Snapshot took a subset of records from the Publisher to the Subscriber, that were in-line with the filters defined for that Publication.
Test 1
1. The subscriber receives the Snapshot, and all expected records are added.
2. An update to a row at the Publisher is propagated to the Subscriber, where it is deleted because it no longer satisfies the filters.
* Related records are not removed from the Subscriber though. This is the Problem.
* So we tried to see what would happen if we changed the record at the Subscriber instead of at the Publisher.
Test 2
1. We reinitialized the Subscriber, the initial Snapshot was then applied to the Subscriber. All valid records go across.
2. This time, we changed the row at the Subscriber, and this change meant that the record no longer satisfied the Filter. Merge Agent ran, and deleted the record on the Subscriber, except this time it also deleted all related records, which is what we expected to happen in the first place.
These "deletes" were not propagated back to the Publisher, which again is what we expected.
* No Bulk updates/inserts took place, other than those caused by the initial Snapshot.
* No records were added whilst the Merge Agent was running.
Hope I've described this well enough.
If you need more detail of the row filters we're using and/or the join filters, let me know.
Thanks again for your help.

Mr Le,
thanks for the detailed description. Please can you post up a schema of the
tables involved and details of the joins/filters involved, and I'll
reproduce it here.
Cheers,
Paul