Memory Usage
Large number of instruments in securities stream
A large number (hundreds of thousands or more) of instruments defined in the TimeBase securities stream can consume considerable memory in Aggregator, since it maintains a per-instrument field-value cache.
You can reduce this memory footprint using the two settings below.
Instruments filter
This filter reduces the set of instruments loaded into the SecurityMetadata cache used by Aggregator connectors. The provided value must be a valid QQL condition.
Example:
Aggregator.smd.instrumentFilter=Bloomberg$Symbol IS NOT NULL
(Aggregator will skip instruments that do not define the Bloomberg$Symbol field value.)
note
This instrument filter defines system-wide behavior. Each Aggregator data collection process can define its own instrument filter.
Usually, the data-collection-level instrument filter is a subset of the system-wide filter.
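For illustration, a data-collection-level filter is usually the system-wide condition narrowed by an additional clause. In the hypothetical example below, the `Currency` field and the second condition are illustrative assumptions, not fields guaranteed to exist in your securities stream.

System-wide filter:

```
Bloomberg$Symbol IS NOT NULL
```

Hypothetical data-collection-level filter (matches a subset of the instruments matched above):

```
Bloomberg$Symbol IS NOT NULL AND Currency = 'USD'
```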
Fields filter
Reduces the set of fields the SecurityMetadata cache will keep for each symbol (defined as a comma-separated list). Example:
Aggregator.smd.fieldFilter=ActiveContract,Bloomberg$Symbol
Hybrid data connector buffer
When Hybrid data connectors are used, pay attention to live message buffer sizes specified via the liveMsgBufferSize parameter (default is 20000).
<liveMsgBufferSize>20000</liveMsgBufferSize>
This parameter defines how many live messages can be accumulated (per instrument).
The buffer is implemented as a circular queue (ring buffer), and it is normal for this buffer to overflow; when it does, each newest live message simply overwrites the oldest.
Buffer capacity should be large enough to compensate for historical API lag.
For example, if any market tick that happens now is guaranteed to be visible via the historical API within the next 10 seconds, then this buffer needs to accommodate up to 10 seconds worth of live data.
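The sizing rule above can be sketched as a quick estimate. The peak message rate, lag, and headroom factor below are illustrative assumptions, not product defaults:

```python
def required_buffer_size(peak_msgs_per_sec: float,
                         historical_lag_sec: float,
                         safety_factor: float = 2.0) -> int:
    """Estimate a liveMsgBufferSize that covers the historical API lag.

    The buffer must hold all live messages for one instrument that arrive
    while the historical API catches up; the safety factor leaves headroom
    for activity spikes.
    """
    return int(peak_msgs_per_sec * historical_lag_sec * safety_factor)

# Example: an instrument peaking at 1,000 msgs/sec with a 10-second
# historical API lag and 2x headroom needs the default capacity of 20,000.
print(required_buffer_size(1_000, 10))  # 20000
```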
From a memory consumption point of view, be aware of the footprint of large buffers.
For example, for feeds that collect trades, each trade message is encoded in ~40–50 bytes.
If you multiply this by buffer size (20,000) and the number of instruments that may undergo gap recovery (say 2,000), this implies ~2 GB of memory just for these live message buffers.
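The arithmetic above can be reproduced directly; the figures below are the example values from the text (50 bytes per trade message, the default buffer size of 20,000, and 2,000 instruments in gap recovery):

```python
def live_buffer_footprint_bytes(msg_size_bytes: int,
                                buffer_size: int,
                                instruments_in_recovery: int) -> int:
    """Worst-case memory held by live message buffers during gap recovery:
    one full ring buffer per instrument undergoing recovery."""
    return msg_size_bytes * buffer_size * instruments_in_recovery

total = live_buffer_footprint_bytes(50, 20_000, 2_000)
print(f"{total / 1e9:.1f} GB")  # 2.0 GB
```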
note
In reality, many factors affect the gap recovery algorithm (market activity spikes, after-hours behavior, historical API throttling, etc.).
It may be difficult to set these parameters correctly from the beginning. Remember that Aggregator keeps retrying if a single gap recovery attempt fails.
Set these parameters conservatively and tune them upward as needed.