Using WCF with NLB

There comes a point in time when one server is just not enough. You need to scale your service across multiple back-ends. Enter Windows Network Load Balancing (a.k.a. NLB). With NLB, incoming TCP connection initiations can be serviced by different machines in the cluster.

This has an obvious impact on in-memory sessions. If you are using a protocol such as WS-Reliable Messaging (which will reestablish connections during the course of a session) or WS-Secure Conversation (which uses a session-negotiated security token), then you want to make sure subsequent connection requests go to the same back-end server. Similarly, when using a transport such as HTTP (which can, in worst-case scenarios, use a new connection for each request-reply), if you are depending on an in-memory session then you will also need to ensure consecutive connection establishments arrive at the same server. Many load balancers have an “affinity” setting that you can enable to get this behavior. Alternatively, you can write a “stateless” service (from the in-memory perspective), where any app state is stored outside of your process. If you take this approach, you should avoid using WCF Sessions (which may store infrastructure state in-process).
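
If you go the stateless route, a minimal sketch (the contract and service names here are made up for illustration) looks something like this: SessionMode.NotAllowed keeps the contract sessionless, and PerCall instancing ensures no per-client state lives in your process, so any back-end in the cluster can serve any call.

    using System.ServiceModel;

    // Sessionless contract: session-requiring bindings are rejected outright.
    [ServiceContract(SessionMode = SessionMode.NotAllowed)]
    public interface IOrderLookup
    {
        [OperationContract]
        string GetOrderStatus(int orderId);
    }

    // PerCall instancing: a fresh service instance per call, so nothing
    // accumulates in memory between requests.
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
    public class OrderLookupService : IOrderLookup
    {
        public string GetOrderStatus(int orderId)
        {
            // Any application state would come from an external store
            // (database, distributed cache), not from process memory.
            return "Pending";
        }
    }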

Nicholas gave a nice overview of our Transport quotas. We have a few knobs on the TcpTransportBindingElement specifically targeted for NLB-type scenarios. They are associated with our client-side connection pooling. Nicholas highlights the final object model for these quotas, and I’ll go into a little detail about how these quotas will affect your use of NLB with our TCP transport.

  • IdleTimeout: Controls the amount of time that a TCP connection can remain idle in our connection pool. This is useful for scenarios where you don’t mind connections being reused when you are under load, but when the load dissipates you wish to reclaim your connection. The default value of IdleTimeout is 2 minutes.
  • LeaseTimeout: Controls the overall lifetime of a TCP connection. The lower you set this value, the more likely you will be re-load balanced when you create a new channel. Note that if this timeout expires we won’t just fault an existing connection. We will however close that connection when you Close() the active channel. This setting works well in conjunction with IdleTimeout. For example, if you are cycling through channels, and you are never really “idle”, you can still ensure periodic connection recycling through LeaseTimeout. The default value of LeaseTimeout is 5 minutes. (A sketch showing where both of these settings live follows this list.)
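
As a rough illustration (the factory method name and the specific timeout values here are mine), both quotas live on the TcpTransportBindingElement’s ConnectionPoolSettings, so you set them by composing a custom binding. NetTcpBinding doesn’t expose them directly, which is why the sketch uses a CustomBinding:

    using System;
    using System.ServiceModel.Channels;

    public static class NlbBindingFactory
    {
        // Builds a net.tcp-style custom binding with aggressive connection
        // recycling, so new channels get a chance to be re-load-balanced
        // to another back end.
        public static CustomBinding CreateNlbFriendlyTcpBinding()
        {
            var transport = new TcpTransportBindingElement();

            // Reclaim pooled connections that have sat idle for more than 1 minute.
            transport.ConnectionPoolSettings.IdleTimeout = TimeSpan.FromMinutes(1);

            // Recycle even busy connections: once 2 minutes have elapsed, the
            // connection is closed when the active channel is closed.
            transport.ConnectionPoolSettings.LeaseTimeout = TimeSpan.FromMinutes(2);

            // Message encoder first, transport last.
            return new CustomBinding(
                new BinaryMessageEncodingBindingElement(),
                transport);
        }
    }

The same values can also be set in config via the connectionPoolSettings element under tcpTransport in a customBinding.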

For HTTP we inherit our connection pooling settings from System.Net, so you can tweak their idle settings in order to control connection recycling frequency over HTTP.
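
For example (the endpoint URI below is just a placeholder), the two System.Net knobs that matter are ServicePointManager.MaxServicePointIdleTime, which closes keep-alive connections that sit idle too long, and ServicePoint.ConnectionLeaseTimeout, which forces periodic recycling much like LeaseTimeout does for TCP:

    using System;
    using System.Net;

    class HttpPoolTuning
    {
        static void Main()
        {
            // Close idle keep-alive connections after 30 seconds.
            // Affects ServicePoints created after this point.
            ServicePointManager.MaxServicePointIdleTime = 30 * 1000;

            // Force connections to this endpoint to be recycled after 1 minute,
            // even if they never go idle.
            ServicePoint sp = ServicePointManager.FindServicePoint(
                new Uri("http://example.com/MyService.svc"));
            sp.ConnectionLeaseTimeout = 60 * 1000;
        }
    }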

11 thoughts on “Using WCF with NLB”

  1. Sajay

    Where can I find information regarding different bindings and their behaviors in load-balanced networks, particularly when sessions (security context, etc.) are established?

  2. Pingback: Sajay Antony : Load Balancing WCF

  3. Francisco Javier Banos Lemoine

    Hey Kenny. I think the LeaseTimeout default value is five minutes instead of two minutes. I just checked it out with a ConsoleHost I wrote to test NLB. A small contribution to your excellent article. Regards.

  4. Kenny

    Yes, I mistyped in the post; it’s absolutely 5 minutes for LeaseTimeout by default. Thanks for the correction!

  5. David Harrington

    Thanks Kenny,

    We are looking at using a Cisco NetScaler to load balance our net.tcp-hosted WCF services. You pointed me in the right direction.

  6. Pingback: Load Balancing WCF – basicHttpBinding at Sajay's Weblog

  7. Victor Ivanov

    Hello Kenny,

    Can you give a hint about LeaseTimeout – should it be set on the server side or the client side?

    Thanks!

  8. Sedat

    Hi Henny;

    As I understand it, the TCP connections in the pool remain open, and when we open a channel an existing connection from the pool (the connection to Server1) is attached to the channel, so we will not be able to consume the service from the next server (Server2) in the NLB cluster.

    What happens if we use a hardware-based load balancer like NetScaler?

    My guess is that, in that case, there will be no problem like the one above, because the client pool always has an open connection to the hardware-based load balancer. Since the balancer is not a virtual endpoint (in Windows NLB the IP is virtual, and there is no physical component managing connections), it can host its own pool and should create or reuse the right connection to the load-balanced server.

    This is my assumption. Is it correct?

    Thank you.

  9. Sedat

    Hello Kenny,

    I think I got the point. The hardware-based LB keeps track of the clients/sessions. Since NetTcp has a transport session and the client keeps the (pooled) connection open, a session for that client’s transport channel remains on the LB. Because the connection between the LB and the client stays open, the LB also keeps a connection between itself and the balanced server.

    That chain starts with the connection from the client’s pool, and the client connection is kept open in the client’s pool.

    So it is the same with hardware-based LBs, and we should consider the quotas above.

    Please let me know if I am wrong.

    Thank you very much, and sorry for the typo with your name in the previous message.

  10. Kenny

    Correct, Sedat – any TCP-based load balancer (hardware or software) will do “connection pinning”: once a client has connected to the load balancer, it will route that connection to the same back end for the lifetime of the TCP connection. Sometimes you have higher-level load balancers (HTTP-based, for example) that will rebalance at each “request boundary”, but in those cases you aren’t likely able to use net.tcp. 🙂
