
WCF: Efficient Buffer Management

In order to read and write Messages, a transport may need to buffer the serialized message. At the very least, the Headers need to be buffered. And in many cases (such as to avoid head-of-line blocking issues), buffering the entire message is necessary.

While the CLR’s garbage collector does a really good job of recycling memory, there are still costs involved with each allocation. These costs include zeroing out the buffer, as well as GC churn. As our performance dev says, “no work is better than some work.” A bunch of Indigo R&D has gone into creating an efficient way to pool buffers given two knobs:

  • MaxBufferPoolSize – the total number of bytes to pool
  • MaxItemSize – maximum size of an item stored in the pool

With proper tuning, large (>10%) increases in performance can be seen. This functionality is exposed through System.ServiceModel.Channels.BufferManager. If you are writing a custom channel, you can create a BufferManager through the static method:

BufferManager.CreateBufferManager(long maxBufferPoolSize, int maxBufferSize)

Then instead of calling “new byte[n];”, you can call “bufferManager.TakeBuffer(n);”. Just remember to call bufferManager.ReturnBuffer(buffer) (in a finally block if necessary) when you are finished. Lastly, you can call bufferManager.Clear() if you ever want to flush the cache (e.g. after a certain amount of idle time).
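
Putting those pieces together, here is a minimal usage sketch (messageSize is just a stand-in for whatever length you actually need):

    using System.ServiceModel.Channels;

    // Pool at most 512K total, with no single buffer larger than 64K.
    BufferManager bufferManager =
        BufferManager.CreateBufferManager(512 * 1024, 64 * 1024);

    int messageSize = 16 * 1024; // stand-in for your serialized message size
    byte[] buffer = bufferManager.TakeBuffer(messageSize);
    try
    {
        // ... read or write the serialized message using the pooled buffer ...
    }
    finally
    {
        // Return the buffer so it can be recycled rather than collected.
        bufferManager.ReturnBuffer(buffer);
    }

    // Flush the pool entirely (e.g. after a long idle period):
    bufferManager.Clear();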

On the built-in WCF Bindings and Transport Binding Elements, we expose MaxBufferPoolSize as a property for you to control our cache footprint. If you are sending small (< 64K) messages, then the default value of 512K is likely acceptable. For larger messages, it's often best to avoid pooling altogether, which you can accomplish by setting MaxBufferPoolSize=0. You should of course profile your code under different values to determine the settings that will be optimal for your application.
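
For instance, a quick sketch of opting out of pooling on NetTcpBinding (the transport binding elements expose the same property):

    using System.ServiceModel;

    NetTcpBinding binding = new NetTcpBinding();

    // Small (< 64K) messages: the 512K default pool is usually fine.
    // Large messages: disable pooling entirely.
    binding.MaxBufferPoolSize = 0;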

Back From PDC: Have some code!

While others were able to get a number of blog entries together, I spent the majority of my time at the conference either talking to customers or in final preparation for my talk on Channel Extensibility.

[Image: Messaging over RFC 1149]

Yasser and I went through the basics of extending the channel layer to write a custom transport and a custom layered channel. I walked through writing a custom TCP-based transport channel. I then adapted that channel to interop with WSE 3.0 Beta. I’ve posted the code (along with a brief README) here. As time permits, I’ll walk through the important pieces in future posts.

Yasser covered writing a custom layered channel (also called a “protocol channel”). He wrote a “chunking channel” that allows you to fragment a Message into a number of smaller messages (the maximum size of which is controlled through a quota). These chunks get reassembled on the receiving side, which enables streaming scenarios over buffered transports. This also means you can use WS-Security (and WS-RM) in conjunction with chunking to reliably and securely stream data over any transport. Very powerful. Code for the chunking channel will be posted in the next few days; a conceptual sketch of the idea follows.
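
To give a feel for the idea, here is an illustrative sketch (with made-up names; this is not the actual chunking channel code) of what the sending side boils down to:

    using System;
    using System.Collections.Generic;

    static IEnumerable<byte[]> SplitIntoChunks(byte[] body, int maxChunkSize)
    {
        // Slice the serialized body into quota-sized pieces; each piece
        // would be wrapped in its own Message and sent in order.
        for (int offset = 0; offset < body.Length; offset += maxChunkSize)
        {
            int count = Math.Min(maxChunkSize, body.Length - offset);
            byte[] chunk = new byte[count];
            Buffer.BlockCopy(body, offset, chunk, 0, count);
            yield return chunk;
        }
    }

The receiving side simply concatenates the chunks back together (honoring the same quota) before handing the reassembled Message up the stack.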

PDC Live and Online

A few quick notes as I’m scrambling to get my final ducks in a row for PDC 05.

If you aren’t able to attend in person, you can still keep up on all the happenings at pdcbloggers.net. They have aggregated feeds from both speakers and attendees, and it’s a good first stop in checking the online pulse of the conference.

And for those of you joining us in LA, you can check out my talk in Room 406 AB on Thursday, September 15, from 3:45 to 5:00 PM. I’ll also be roaming around the WCF Track Lounge and Hands on Labs, and at the WCF Extensibility table during Ask the Experts. See you next week!

"Stuffing" Packets with net.tcp and net.pipe

One performance tuning knob you can turn on Indigo’s TCP and Named Pipe transports is the degree to which we batch messages. Sockets programmers may be familiar with this concept as Nagle’s algorithm.

In brief, you can trade latency against packet utilization by setting three knobs:

  1. IContextChannel.AllowOutputBatching (or message.Properties.AllowOutputBatching at the Channel layer)
  2. bindingElement.MaxOutputDelay
  3. bindingElement.ConnectionBufferSize

When you set AllowOutputBatching=true, you are saying “hold on to this message in a local buffer and send it out with other serialized messages if possible.” ConnectionBufferSize determines the size of this local buffer (as well as the buffer sizes used by the underlying network objects). MaxOutputDelay specifies the maximum amount of time that we will wait for more data to package with a batched message; its default value is 200 milliseconds.
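
For example, a custom binding that leans toward throughput over latency might look like this (the values are illustrative, not recommendations):

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Channels;

    TcpTransportBindingElement transport = new TcpTransportBindingElement();
    transport.ConnectionBufferSize = 64 * 1024;                // local batching buffer
    transport.MaxOutputDelay = TimeSpan.FromMilliseconds(500); // hold batches up to 500ms

    Binding binding = new CustomBinding(
        new BinaryMessageEncodingBindingElement(), transport);

    // Then opt a channel (or an individual message) into batching:
    // ((IContextChannel)channel).AllowOutputBatching = true;
    // message.Properties.AllowOutputBatching = true;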

One last note: when using the typed programming model, these values only affect OneWay methods. For request-reply, ServiceModel will always optimize for latency and will set AllowOutputBatching = false.

Robinson Crusoe Parental Unit

Thanks to everyone for your concerns about my Dad, who’s in Slidell, LA (on the north shore). He’s alive and well and performing much-needed emergency work down there.

They got hit pretty hard by Katrina, but Slidell Memorial Hospital (where my Dad was working during the storm) made it through still standing, and everyone inside was okay. Landlines and cell towers are still down, but I get bits of information through my step-mom, who evacuated a few hundred miles away.

This is where I take a moment to share just how amazing my father is. Not only has he been patching up the injured as they arrive in droves (he’s a general surgeon), but when the generator at the hospital went down, he got out his tools and fixed that too! That restored air conditioning and hot water to the hospital. (Dad was able to reward himself with a hot shower :)). Then, during breaks between surgeries, he went back to his house (which had a tree fall on it, but miraculously didn’t take on water) and fixed up the roof. I can’t imagine a more awesome father than Gary J. Wolf. Dad, I love you and I’m continually amazed and proud. Keep up the great work!