In-proc transport thoughts

Every couple of months someone asks: “does Indigo have an in-proc transport?” Which usually means “does Indigo have a way of sending messages so that it doesn’t ever leave the app-domain?” When this request is made of the performance team, Al Lee’s canned response is “why aren’t you using named pipes? Is it not fast enough?” Turns out that our named pipe transport has (so far) been fast enough for all the scenarios that have come to us. That is, in all of these cases, the transport is contributing very little to the overall cost.

All that aside, it’s still an interesting problem (and one I expect we’ll have to solve in the next version or two in order to expand our scope for “near” scenarios). The key thing to remember when writing any transport, even an in-appdomain one, is that boundaries must still be honored. That is, the Message must go through WriteMessage(XmlWriter) and be reconstituted through ReadMessage(XmlReader). As Yasser illustrated in his layered “protocol” channel, a sanctioned extensibility point is wrapping the Message and then performing your actions at Write or Read time. Trying to preempt this step has the same pitfalls as performing a shallow copy where you really need a deep copy: it just doesn’t work!
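
To make that concrete, here’s a minimal sketch of the round trip written against the public Message and XmlDictionaryReader/XmlDictionaryWriter surface as it exists today (treat the exact overloads as illustrative rather than gospel):

    using System.IO;
    using System.ServiceModel.Channels;
    using System.Xml;

    static class InProcBoundary
    {
        // "Send" writes the message out through a writer; "receive" reconstitutes
        // a brand-new Message from a reader. No live object crosses the boundary.
        public static Message RoundTrip(Message message)
        {
            MessageVersion version = message.Version;

            var stream = new MemoryStream();
            using (XmlDictionaryWriter writer = XmlDictionaryWriter.CreateBinaryWriter(stream))
            {
                message.WriteMessage(writer);
            }

            stream.Position = 0;
            XmlDictionaryReader reader =
                XmlDictionaryReader.CreateBinaryReader(stream, XmlDictionaryReaderQuotas.Max);
            return Message.CreateMessage(reader, int.MaxValue, version);
        }
    }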

I do plan on mocking up an in-appdomain transport in the next few weeks. I will of course share the results here. For those interested, my plan of attack would be to first write a basic transport using binary for the encoding and a shared queue for “transport”. Future iterations would optimize both the queue and the format (since we don’t need to serialize strings or other immutable objects for example).
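
For the curious, here’s roughly the shape of that first iteration. This is a sketch, not product code: the class name and the bare Send/Receive pair are mine (a real channel would implement the channel interfaces), and the “wire” is nothing more than a shared in-memory queue of binary-encoded buffers:

    using System;
    using System.Collections.Generic;
    using System.ServiceModel.Channels;
    using System.Threading;

    // Hypothetical in-appdomain "transport": the wire is a shared queue of buffers.
    sealed class InProcQueueTransport
    {
        static readonly MessageEncoder Encoder =
            new BinaryMessageEncodingBindingElement().CreateMessageEncoderFactory().Encoder;
        static readonly BufferManager Buffers =
            BufferManager.CreateBufferManager(1024 * 1024, 64 * 1024);

        readonly Queue<ArraySegment<byte>> queue = new Queue<ArraySegment<byte>>();

        public void Send(Message message)
        {
            // The boundary is honored: the message is fully encoded to a buffer...
            ArraySegment<byte> buffer = Encoder.WriteMessage(message, int.MaxValue, Buffers);
            lock (queue)
            {
                queue.Enqueue(buffer);
                Monitor.Pulse(queue);
            }
        }

        public Message Receive()
        {
            ArraySegment<byte> buffer;
            lock (queue)
            {
                while (queue.Count == 0)
                    Monitor.Wait(queue);
                buffer = queue.Dequeue();
            }
            // ...and a brand-new Message is reconstituted on the receiving side.
            return Encoder.ReadMessage(buffer, Buffers);
        }
    }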

3 thoughts on “In-proc transport thoughts”

  1. Matthew Kane

    Whether it’s in-proc or just a case in which you control both ends, there’s a very typical case that I think can be highly optimized inside the channel. This is the case in which both ends have a copy of the message schema (and don’t they have to?) and both ends represent those objects in memory as POO (plain old objects), usually code-generated from the schema.

    Now, at every point in the serialization and deserialization process we know exactly which nodes in the XML infoset could possibly appear and what type they are, and in many cases there is only one possibility. All we need in the stream at that point is at most a byte to say which possibility comes next, followed by the binary serialization of the data. We never need to serialize or deserialize the name of any node.

    It seems to me that the code generator that generates the POO can also generate the POO serialization and deserialization code for this highly optimized binary stream.

    I would suspect that reducing the amount of data transmitted and the conversions to and from strings will do more to improve performance than the difference between a named pipe and an in-proc message queue, and this optimization isn’t limited to in-proc.

  2. Kenny

    Indeed, that was my thought for the “optimizing the format” aspect. You can gain transformational throughput increases with such an approach. For example, a simple prototype that smuggles string objects instead of serializing/deserializing yields a 2-3x improvement. Of course, that particular improvement is limited to in-proc.

    I will also note though that our binary format uses a dictionary approach for optimizing out node names, and if you use the XmlDictionaryReader/XmlDictionaryWriter interfaces that we expose then you can take full advantage of that optimization (there’s a small sketch of this after the comments below).

  3. Matthew Kane

    Ah, now that I’ve disassembled the code I see what I haven’t seen documented anywhere. When you use the DataContract attribute, under the hood it creates a dictionary giving each node an integer ID and only the integer ID is passed on the wire. I’m having trouble following the code where it writes the value, but I’m guessing that the binary writer is writing numeric values as numeric values and not strings.

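To tie the thread together with something concrete, here is a minimal sketch of the kind of generated writer being described above, using the XmlDictionary support in the binary format. The type and element names are made up for illustration; the point is that dictionary entries hit the wire as small integer IDs rather than as strings, and typed values go out in binary form rather than as text:

    using System.IO;
    using System.Xml;

    // Hypothetical schema-generated type; names are made up for illustration.
    sealed class Order
    {
        // Dictionary entries shared by writer and reader: with the binary writer
        // below they are written as small integer IDs, never as the strings themselves.
        static readonly XmlDictionary Names = new XmlDictionary();
        static readonly XmlDictionaryString OrderName = Names.Add("Order");
        static readonly XmlDictionaryString QuantityName = Names.Add("Quantity");

        public int Quantity;

        public void WriteTo(Stream stream)
        {
            using (XmlDictionaryWriter writer =
                XmlDictionaryWriter.CreateBinaryWriter(stream, Names))
            {
                writer.WriteStartElement(OrderName, XmlDictionaryString.Empty);
                writer.WriteStartElement(QuantityName, XmlDictionaryString.Empty);
                writer.WriteValue(Quantity);   // written as a binary Int32, not text
                writer.WriteEndElement();
                writer.WriteEndElement();
            }
        }
    }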
