I was monitoring a client's warehouse build process last night and stumbled
across the following. During a sproc call that moves data from a staging db
to a warehouse db, the spid parallelized across all 8 CPUs and then sat there with
the following information shown by sys.dm_os_waiting_tasks. Note that all
CPUs were pegged at 100% during this event, although I don't know if this
particular proc was the cause. To my knowledge this was the only activity
on the server at the time, however. There weren't really any other waiting
tasks of note beyond the typical system tasks. I was unable to get more
detailed information on the actual blocking tasks, though; it was pretty late
at night and I wasn't at my sharpest.

This is a 2005 box, patched past SP2 to build 3186: 64-bit, 32GB RAM, 4
dual-core procs, no hyperthreading. The I/O subsystem isn't very beefy (a
7-drive RAID 5 set on a middling SAN) and the database totals about 400GB or
so.
There were 8 rows in sys.dm_os_waiting_tasks with the data below (a sketch of the query used to grab it follows the output):
waiting_task_address: 0x0000000000EDA868
session_id: 57
wait_duration_ms: 193860 (at the time of the DMV grab)
wait_type: CXPACKET
resource_address: 0x00000000801F9A50
blocking_task_address: all were different
blocking_session_id: 57
blocking_exec_context_id: all were different
resource_description: exchangeEvent id=port801f61c0 nodeId=5
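Roughly the query that produced this snapshot, reconstructed from the columns above (the session_id filter is an assumption on my part):
SELECT waiting_task_address, session_id, wait_duration_ms, wait_type,
       resource_address, blocking_task_address, blocking_session_id,
       blocking_exec_context_id, resource_description
FROM sys.dm_os_waiting_tasks
WHERE session_id = 57;   -- the spid running the warehouse load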
Anyone got any idea what an "exchangeEvent id=port801f61c0 nodeId=5"
resource_description is or means? Is it simply an internal mechanism
related to parallelization, and could this query (it was either a huge
insert or update - not sure which, sorry) perhaps be sped up (or at least not
get hung up for an extended period) by specifying a MAXDOP of 1, 2 or maybe
4 (a sketch of what I mean is below)? Any information or help would be appreciated.
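For clarity, the kind of hint I have in mind is this; the statement itself is only a placeholder, since I don't have the proc text handy and the real table and column names are unknown:
-- hypothetical shape of the load statement; all names are placeholders
INSERT INTO Warehouse.dbo.FactTable (Col1, Col2)
SELECT Col1, Col2
FROM Staging.dbo.StageTable
OPTION (MAXDOP 4);   -- or 1 or 2, to cap parallelism for just this statement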
Kevin G. Boles
TheSQLGuru
Indicium Resources, Inc.
kgboles a earthlink dt net|||Hi
Have you checked your missing indexes for anything related to the task?
John
"TheSQLGuru" wrote:
> I was monitoring a client's warehouse build process last night and stumbled
> across the following. During a sproc call that moves data from a staging db
> to a warehouse db, the spid parallelized across all 8 CPUs and then sat there with
> the following information shown by sys.dm_os_waiting_tasks. Note that all
> CPUs were pegged at 100% during this event, although I don't know if this
> particular proc was the cause. To my knowledge this was the only activity
> on the server at the time, however. There weren't really any other waiting
> tasks of note beyond the typical system tasks. I was unable to get more
> detailed information on the actual blocking tasks, though; it was pretty late
> at night and I wasn't at my sharpest.

> This is a 2005 box, patched up past SP2 to build 3186. 64bit, 32GB RAM, 4
> dual-core procs, no hyperthreading. The I/O subsystem isn't very beefy (7
> drive RAID5 set on a middling SAN) and the database totals about 400GB or
> so.
>
> There were 8 rows in waiting tasks with the below data:
> waiting_task_address: 0x0000000000EDA868
> session_id: 57
> wait_duration_ms: 193860 (at the time of the DMV grab)
> wait_type: CXPACKET
> resource_address: 0x00000000801F9A50
> blocking_task_address: all were different
> blocking_session_id: 57
> blocking_exec_context_id: all were different
> resource_description: exchangeEvent id=port801f61c0 nodeId=5
>
> Anyone got any idea what an "exchangeEvent id=port801f61c0 nodeId=5"
> resource_description is or means? Is it simply an internal mechanism
> related to parallelization, and could this query (it was either a huge
> insert or update - not sure which, sorry) perhaps be sped up (or at least not
> get hung up for an extended period) by specifying a MAXDOP of 1, 2 or maybe
> 4? Any information or help would be appreciated.
>
> --
> Kevin G. Boles
> TheSQLGuru
> Indicium Resources, Inc.
> kgboles a earthlink dt net
>
>|||Unfortunately the client rebuilds all indexes each night as part of their
warehouse rebuild, so the missing index information isn't useful.
Kevin G. Boles
TheSQLGuru
Indicium Resources, Inc.
kgboles a earthlink dt net
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:FCF33C96-720B-4F2C-96BD-2E2A8179CFA5@.microsoft.com...
> Hi
> Have you checked your missing indexes for anything related to the task?
> John
> "TheSQLGuru" wrote:
>|||Hi
A common cause of parallelisation is missing indexes, so unless you can find
the query that is probably causing this you may not be able to solve the
issue. You may want to try profiling the process with SQL Profiler.
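As a lighter-weight alternative to a full trace, you could also try grabbing the running statement directly while it is stuck, along these lines (just a sketch; the session_id filter is assumed):
SELECT r.session_id, r.wait_type, r.wait_time, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id = 57;   -- the waiting spid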
John
"TheSQLGuru" wrote:
> Unfortunately the client rebuilds all indexes each night with their
> warehouse rebuild so missing indexes isn't useful.
> --
> Kevin G. Boles
> TheSQLGuru
> Indicium Resources, Inc.
> kgboles a earthlink dt net
>
> "John Bell" <jbellnewsposts@.hotmail.com> wrote in message
> news:FCF33C96-720B-4F2C-96BD-2E2A8179CFA5@.microsoft.com...
>
|||On Dec 24, 8:10 pm, John Bell <jbellnewspo...@.hotmail.com> wrote:
> Hi
> A common cause of parallelisation is missing indexes, so unless you can find
> the query that is probably causing this you may not be able to solve the
> issue. You may want to try SQL profiling the process.
> John
>
> "TheSQLGuru" wrote:
Hello,
maybe the reason for the problem is the ALTER INDEX statements used to rebuild
the indexes.
If that is the case, you should run them with an explicit MAXDOP setting.
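For example, something along these lines (just a sketch; the table name is a placeholder):
ALTER INDEX ALL ON dbo.FactTable        -- placeholder table name
REBUILD WITH (MAXDOP = 1);              -- limit the rebuild to a single scheduler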
|||I was specifically asking about the "exchangeEvent id=port801f61c0 nodeId=5"
resource_description. I am trying to identify what it means or where it comes
from, to see whether there is anything I can do about it being the wait.
The query that caused this hits the entire table since it is a load
mechanism. Indexing will not help since a table scan is more efficient in
such cases.
Kevin G. Boles
TheSQLGuru
Indicium Resources, Inc.
kgboles a earthlink dt net
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:FF9951B2-2ACE-4A72-8996-53F36082D827@.microsoft.com...
> Hi
> A common cause of parallelisation is missing indexes, so unless you can find
> the query that is probably causing this you may not be able to solve the
> issue. You may want to try SQL profiling the process.
> John
> "TheSQLGuru" wrote:
>|||TheSQLGuru,
It looks like Bart Duncan knows:
http://blogs.msdn.com/bartd/archive...-3.aspx
If you read all the way down to the end, in his last post Bart makes the
following comment: "You're right -- this is a parallel thread deadlock.
The key indicator of this is the fact that the resources involved in the
deadlock (see the "resource-list" section) are not lock resources; they are
"exchangeEvent" resources, instead."
He goes on to suggest how to catch what is happening with a Profiler trace.
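A lower-overhead option, if the problem reproduces, might be the deadlock report trace flag rather than a full Profiler trace; this is my suggestion rather than Bart's, so treat it as a sketch:
DBCC TRACEON (1222, -1);   -- SQL 2005: writes XML deadlock reports to the error log for all sessions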
Hope that is some help,
RLF
"TheSQLGuru" <kgboles@.earthlink.net> wrote in message
news:13n5da066uv3rce@.corp.supernews.com...
>I was specifically asking about the "exchangeEvent id=port801f61c0
>nodeId=5" resource_description. I am trying to identify what that means or
>comes from to see if there is anything I can do to affect it being the
>wait.
> The query that caused this hits the entire table since it is a load
> mechanism. Indexing will not help since a tablescan is more efficient in
> such cases.
> --
> Kevin G. Boles
> TheSQLGuru
> Indicium Resources, Inc.
> kgboles a earthlink dt net
>
> "John Bell" <jbellnewsposts@.hotmail.com> wrote in message
> news:FF9951B2-2ACE-4A72-8996-53F36082D827@.microsoft.com...
>