That is, Program A encodes and writes some data
which is then read and decoded by Program B.
Both the format and transmission method are variable:
we could use a JSON message, a YAML file, etc.
Libraries are available to perform both encoding and decoding,
but they assume that the data involved uses only the data types
that the formats directly support.
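To make the limitation concrete, here is a small sketch using Python's standard json module: values built from JSON's native types encode cleanly, but anything outside that set (a date, a set) raises an error, because the format has no representation for it.

```python
import json
import datetime

# JSON's native types round-trip without trouble:
# objects, arrays, strings, numbers, booleans, and null.
print(json.dumps({"name": "Ada", "scores": [1, 2, 3]}))

# Types outside that set fail outright.
try:
    json.dumps({"when": datetime.date(2024, 1, 1), "tags": {"a", "b"}})
except TypeError as err:
    print("cannot encode:", err)
```

Without something like Transit, each application must hand-roll its own conversion for every such type, on both the writing and reading side.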
Transit augments these libraries,
adding extensible (tag-based) support for type information.
It also adds features for performance (e.g., binary encoding, caching)
and programmer convenience (e.g., a human-readable JSON format).
Data type mapping is performed by collections of read and write "handlers":
Using introspection, the Write Library determines the original data type
and invokes the appropriate handler.
Other handlers may be invoked (recursively)
until only Transit's base data types remain.
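The write-side recursion can be sketched as follows. The handler registry and function names here are illustrative, not the transit-python API; only the `~#tag` key prefix is borrowed from Transit's actual JSON encoding of tagged values.

```python
import datetime

# Hypothetical write handlers: each maps one data type to a
# (tag, representation) pair built from simpler types.
WRITE_HANDLERS = {
    datetime.date: lambda d: ("date", d.isoformat()),
    set:           lambda s: ("set", sorted(s)),
}

def encode(value):
    """Recursively rewrite a value until only base types remain."""
    handler = WRITE_HANDLERS.get(type(value))
    if handler is not None:
        tag, rep = handler(value)
        # The representation may itself contain rich types, so recurse.
        return {"~#" + tag: encode(rep)}
    if isinstance(value, dict):
        return {k: encode(v) for k, v in value.items()}
    if isinstance(value, list):
        return [encode(v) for v in value]
    return value  # already a base type: str, int, float, bool, None

print(encode({"when": datetime.date(2024, 1, 1), "tags": {"b", "a"}}))
# → {'when': {'~#date': '2024-01-01'}, 'tags': {'~#set': ['a', 'b']}}
```

The result contains only base types, so any stock JSON encoder can serialize it.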
The Read Library operates in a similar manner,
using tags to determine the encoded data type.
If no appropriate Read Handler is available,
Transit simply passes along the tagged, encoded data.
This allows the reading application to deal with the data
in a minimal manner (e.g., copying it to an output stream),
even if it cannot "understand" the data type(s) involved.
This could be useful, for example, in message routing.
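A matching read-side sketch, under the same assumptions (illustrative names, not the transit-python API): tags select a decoder, and an unrecognized tag is preserved in a wrapper value rather than causing a failure, so the data can still be routed or re-emitted.

```python
import datetime
from dataclasses import dataclass

@dataclass
class TaggedValue:
    """Holds data whose tag has no registered read handler."""
    tag: str
    rep: object

# Hypothetical read handlers: each maps a tag back to a rich type.
READ_HANDLERS = {
    "date": datetime.date.fromisoformat,
    "set":  set,
}

def decode(value):
    if isinstance(value, dict):
        if len(value) == 1:
            (key, rep), = value.items()
            if key.startswith("~#"):
                tag = key[2:]
                handler = READ_HANDLERS.get(tag)
                if handler is not None:
                    return handler(decode(rep))
                # No handler: pass the tagged data along unchanged.
                return TaggedValue(tag, decode(rep))
        return {k: decode(v) for k, v in value.items()}
    if isinstance(value, list):
        return [decode(v) for v in value]
    return value

decode({"~#date": "2024-01-01"})  # → datetime.date(2024, 1, 1)
decode({"~#point": [1, 2]})       # → TaggedValue(tag='point', rep=[1, 2])
```

Note that the unknown "point" value survives intact: a router could forward it verbatim, and a downstream reader with a "point" handler would still decode it fully.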
Because handlers only have to deal with single data elements,
they tend to be small and simple (hence, easy to write and test).
So, adding new data types to Transit is much easier than it would be
for (say) raw JSON or YAML.
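As an illustration of that simplicity, a handler pair for a new type is often just a few lines each way. The function names and the "uuid" tag below are hypothetical, chosen only to show the shape of such a pair.

```python
import uuid

def write_uuid(u):
    """Write handler: tag plus a base-type (string) representation."""
    return ("uuid", str(u))

def read_uuid(rep):
    """Read handler: rebuild the rich type from its representation."""
    return uuid.UUID(rep)

u = uuid.uuid4()
tag, rep = write_uuid(u)
assert read_uuid(rep) == u  # round-trips through a plain string
```

Each handler can be unit-tested in isolation, with no changes to the underlying JSON or MessagePack machinery.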
See the main Transit documentation
for detailed information on architecture, data flow, etc.