Performance Characteristics of WCF Encoders

As part of the Framework, we ship three MessageEncoders (accessible through the relevant subclass of MessageEncodingBindingElement):

  1. Text – The “classic” web services encoder. Uses a text-based (UTF-8 by default) XML encoding. This is the default encoder used by BasicHttpBinding and WSHttpBinding.
  2. MTOM – An interoperable format (though less broadly supported than Text) that allows for more optimized transmission of binary blobs, since they don’t get base64 encoded.
  3. Binary – A WCF-specific format that avoids base64 encoding your binary blobs and also uses a dictionary-based algorithm to avoid data duplication. Binary supports “session encoders” that get smarter about data usage over the course of the session (through pattern recognition). This is the default encoder used by NetTcpBinding and NetNamedPipeBinding.
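To make the mapping concrete, here is a minimal C# sketch of selecting each encoder explicitly by composing its MessageEncodingBindingElement subclass into a CustomBinding (the transport pairings are just illustrative):

```csharp
using System.ServiceModel.Channels;

// Each encoder is selected by composing the matching
// MessageEncodingBindingElement subclass into a CustomBinding.
var textBinding = new CustomBinding(
    new TextMessageEncodingBindingElement(),    // what BasicHttpBinding uses by default
    new HttpTransportBindingElement());

var mtomBinding = new CustomBinding(
    new MtomMessageEncodingBindingElement(),
    new HttpTransportBindingElement());

var binaryBinding = new CustomBinding(
    new BinaryMessageEncodingBindingElement(),  // what NetTcpBinding uses by default
    new TcpTransportBindingElement());
```

The standard bindings are just pre-packaged stacks of these same binding elements, which is why swapping encoders is a CustomBinding exercise rather than a new binding type.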

I often get asked “which encoder is the fastest?” (and then “by how much?” :)). As always, the first principle of performance is to measure and tune your exact scenarios to determine if this is a bottleneck for you. That being said, here are some notes on the performance characteristics of our built-in Message Encoders.

Broadly speaking, encoders can impact your performance along two axes: the size of the encoded messages, and the CPU load required to generate/consume those encoded messages.

In general, Binary has the fastest encoding/decoding speed since it has less work to do (usually because there is less data to read/write), thanks to its dictionary-based optimizations. The speedup is greater over TCP/named pipes, since the session encoder can recognize patterns (and negotiate optimizations) over the course of the session. If both participants are using WCF, then Binary is a natural choice for production. (Note that during development, Text may be useful for debugging purposes.)
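As a sketch of what tuning the session encoder looks like, BinaryMessageEncodingBindingElement exposes a MaxSessionSize property that bounds the state the session encoder may accumulate (the value below is illustrative, not a recommendation):

```csharp
using System.ServiceModel.Channels;

// MaxSessionSize bounds the dynamic dictionary state the session
// encoder may build up over the lifetime of the session.
var binaryEncoding = new BinaryMessageEncodingBindingElement
{
    MaxSessionSize = 4096   // illustrative value, in bytes
};
var binding = new CustomBinding(binaryEncoding, new TcpTransportBindingElement());
```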

Both Binary and MTOM yield much faster processing of binary data (by avoiding the base64 encoding step as well as the associated size bloat). Binary achieves this with inline binary blobs. The MTOM format achieves this through an inline base64 stub that references the binary blob outside of the Infoset. In both cases, the user model is abstracted from this detail: the blobs “appear” inline once they pass through the encoder.
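To illustrate that the encoder choice is transparent to the programming model, a hypothetical contract like the one below works unchanged under Text, MTOM, or Binary; only the wire representation of the byte[] differs:

```csharp
using System.ServiceModel;

// The contract is identical under all three encoders; only the wire
// representation of the byte[] differs (base64, MIME reference, or raw bytes).
[ServiceContract]
public interface IBlobService
{
    [OperationContract]
    byte[] GetBlob(string name);   // hypothetical operation, for illustration only
}
```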

If you do not have any binary data involved, MTOM will actually be slower than Text, since it has the extra overhead of packaging and processing the Message within a MIME document. However, if there is enough binary data in the document, then the savings from avoiding base64 encoding (which emits 4 output characters for every 3 input bytes, roughly 33% bloat) can make up for this added overhead.
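That size penalty is easy to see directly. A quick sketch:

```csharp
using System;

byte[] blob = new byte[3000000];            // ~3 MB of binary payload
string encoded = Convert.ToBase64String(blob);
Console.WriteLine(encoded.Length);          // 4000000 chars: 4 output chars per 3 input bytes
```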

We spent a lot of engineering effort tuning the performance of our UTF-8 Text encoder, so you will see better performance with UTF-8 than with the Unicode (UTF-16) variations. As to whether you should use Text or MTOM for interoperable endpoints, the guidance above should help with gut feel, but please measure your scenarios!
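As a sketch of how the text encoding variant is chosen, TextMessageEncodingBindingElement exposes a WriteEncoding property (UTF-8 is the default):

```csharp
using System.Text;
using System.ServiceModel.Channels;

// The text encoder writes UTF-8 by default; Encoding.Unicode (UTF-16)
// can be substituted, but the UTF-8 path is the tuned one.
var textEncoding = new TextMessageEncodingBindingElement
{
    WriteEncoding = Encoding.UTF8
};
var binding = new CustomBinding(textEncoding, new HttpTransportBindingElement());
```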
