Here are some more “under the hood” details in response to the following question:
How is one supposed to interpret the NetTcpBinding.MaxConnections property on the client? My assumption has been that setting this property at the client only allows this number of concurrent connections, and further connection attempts will be queued until an existing connection is released.
MaxConnections for TCP is not a hard and fast limit, but rather a knob on the connections that we will cache in our connection pool. That is, if you set MaxConnections=2, you can still open 4 client channels on the same factory simultaneously. However, when you close all of these channels, we will only keep two of these connections around (subject to IdleTimeout, of course) for future channel usage. This helps performance in cases where you are creating and disposing client channels. This knob also applies to the equivalent usage on the server side (that is, when a server-side channel is closed, if we have fewer than MaxConnections in our server-side pool, we will initiate I/O to look for another new client channel).
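The pool-cache behavior described above can be sketched as a toy model in Python (purely illustrative; the class and names here are hypothetical, not WCF code):

```python
class ConnectionPool:
    """Toy model of the connection pool described above (NOT WCF code).

    max_connections caps how many *idle* connections are cached for reuse;
    it does not limit how many connections can be open at once.
    """

    def __init__(self, max_connections):
        self.max_connections = max_connections
        self.idle = []        # cached connections available for reuse
        self.next_id = 0

    def take(self):
        # Reuse a cached connection if one exists; otherwise open a new one.
        if self.idle:
            return self.idle.pop()
        self.next_id += 1
        return f"conn-{self.next_id}"

    def release(self, conn):
        # Cache the connection only while under the pool limit;
        # beyond that, the connection is simply closed (dropped here).
        if len(self.idle) < self.max_connections:
            self.idle.append(conn)


pool = ConnectionPool(max_connections=2)
chans = [pool.take() for _ in range(4)]  # 4 simultaneous channels: allowed
for c in chans:
    pool.release(c)                      # only 2 connections are kept
print(len(chans), len(pool.idle))        # → 4 2
```

Note that `take()` never blocks or fails when the limit is exceeded; the limit only governs how many closed connections are cached for reuse, which matches the "knob, not a hard limit" semantics above.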
The reason that we don’t have a hard and fast limit on your connection usage is that you can already control connection usage through your usage of the WCF objects. That is, if you don’t want to use more than two connections, don’t create more than two client channels :) Any additional knobs at the lower layer would only impede debuggability and predictability.
Note that MaxConnections applies across channels. When sending messages over a single channel, you can only send out one message at a time. That is, your second Send() call will not be initiated until your first Send() completes. In this manner, our TCP binding can guarantee in-order delivery. Also, practically speaking there would be a significant amount of complexity (and overall a negative performance hit) if we allowed interleaving of data from multiple messages, as each “chunk” would need to be annotated with a scatter/gather message marker.
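The one-in-flight-message rule can be modeled with a per-channel lock (again a hypothetical Python sketch, not WCF internals): a second send simply waits until the first completes.

```python
import threading


class Channel:
    """Toy model: at most one in-flight message per channel (NOT WCF code)."""

    def __init__(self):
        self._send_lock = threading.Lock()
        self.delivered = []

    def send(self, msg):
        # A second Send() blocks here until the first completes, so the
        # channel writes exactly one message to the wire at a time.
        with self._send_lock:
            self.delivered.append(msg)


ch = Channel()
threads = [threading.Thread(target=ch.send, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(ch.delivered))  # all five messages, each delivered exactly once
```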
Lastly, all of the above comments apply to TransferMode.Buffered (the default). When using Streaming mode, we “check out” a connection for each in-progress send (and not per-channel). So all the above statements will apply to simultaneous sends rather than simultaneous channels. Streaming TCP is a datagram (not a session-ful) channel, and so simultaneous sends are supported since each send will use a separate TCP connection. This is more similar to HTTP’s usage of TCP connections (where each in-flight request-response pair is using a separate TCP connection).
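To contrast the two transfer modes, here is a hypothetical sketch of the Streamed case, where the unit that holds a TCP connection is the in-progress send rather than the channel:

```python
class StreamedChannel:
    """Toy model: in Streamed mode, each in-progress send checks out its
    own TCP connection, so simultaneous sends are possible (NOT WCF code)."""

    def __init__(self):
        self.active_connections = 0  # connections checked out right now
        self.peak = 0                # most connections in flight at once

    def begin_send(self):
        # Each concurrent send holds a separate connection.
        self.active_connections += 1
        self.peak = max(self.peak, self.active_connections)

    def end_send(self):
        # The connection is returned when the send completes.
        self.active_connections -= 1


ch = StreamedChannel()
for _ in range(3):   # start three simultaneous sends on ONE channel...
    ch.begin_send()
for _ in range(3):
    ch.end_send()
print(ch.peak)       # → 3: three connections were in flight at once
```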
Hi Kenny,
How can I avoid opening multiple TCP connections when using TransferMode.Streamed with many calls to the same service?
I want to enjoy both worlds: streaming (large data transfers) and control over the number of open connections (if I already have one open…).
In my app, I’m using the same channel for multiple receives, but I still see many connections opened (Process Explorer – TCP view, at both the client and the server).
Thanks
On the client, the way to control this is to throttle the number of [Begin]Send calls you make simultaneously. On the server side, you can control the number of connections through a combination of service-level throttles and tcpBindingElement.MaxConnections.
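As a sketch, assuming standard WCF configuration elements (the binding name "tcpPooled" and behavior name "throttled" are placeholders, and the values are only examples), the server-side knobs might look like:

```xml
<system.serviceModel>
  <bindings>
    <netTcpBinding>
      <!-- maxConnections: size of the connection pool cache described above -->
      <binding name="tcpPooled" maxConnections="20" transferMode="Streamed" />
    </netTcpBinding>
  </bindings>
  <behaviors>
    <serviceBehaviors>
      <behavior name="throttled">
        <!-- service-level throttles that bound concurrent work -->
        <serviceThrottling maxConcurrentCalls="16"
                           maxConcurrentSessions="10" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```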
First, Thanks for your answer.
In my code, I try to:
(The first time:)
1. Create channelfactory
2. Create channel
3. Open this channel
4. Make the req-res (stream)
Then, on the second call (a very short time after),
I check whether I already have the channel and whether its State is Opened.
If so, I take the channel and use it again to make the req-res.
If the channel is not opened, I re-create it and open it.
What happens is that with this approach I succeed in making all the req-res calls, but every time I see a new TCP socket connection (in Process Explorer’s TCP view) on both sides (client & server).
Playing with the binding’s maxConnections does not help (tested on both sides).
Only when I moved to TransferMode.Buffered did it work using the same socket.
Kenny, I follow your blog and I’ll be happy to show you the code :)
But via e-mail…
Thanks again
Ron
The reason you don’t see this for buffered is that each buffered channel maps to a single socket. For Streaming, each active request-reply maps to a socket. I bet you have a bunch of simultaneous calls to channel.[Begin]Send, which is why you are seeing the new TCP sockets in Process Explorer. Is that a correct assessment?
:)
Hi Kenny,
The end of this story was my bug, which made the host of the channel re-create itself, and with it the channel, over and over; combine this with the stream-vs-buffered difference, and you get a lot of sockets…
Repairing the bug + using a buffered channel solved the problem.
Thanks for your help
Hello,
Is MaxConnections relevant on the client side? Or is it important on the client side only if the service calls back to it?