A collection of hands-on instructions and workflows
Tags: development

Replication Between Two TimeBase Instances

This instruction describes how to replicate data live from one TimeBase instance to another using TimeBase Shell in Docker.

For this example we will use three components which are started one by one in the following order:

  1. TimeBase Master - source instance
  2. TimeBase Slave - target instance
  3. Replication - data replication script

Docker Compose

Run Docker Compose with the configuration for each of the components. Files mounted into each container override the common configuration when component-specific values are supplied.

UAC files uac-file-security.xml and uac-access-rules.xml are common for TimeBase Master and Slave components.

version: '3.5'
services:
  timebase-master: # TimeBase Master settings
    image: "./timebase/server:5.4.21" # TimeBase Docker image you are going to use
    container_name: timebaseMaster
    environment:
      - JAVA_OPTS=
        -Xms8g
        -Xmx8g
        -DTimeBase.version=5.0
    ports:
      - 8111:8011 # TimeBase Master server port
#     - 8122:8022
    deploy:
      resources:
        limits:
          memory: 10G
        reservations:
          cpus: '1'
          memory: 4G
    stop_grace_period: 5m
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
    entrypoint: "/bin/sh -c"
    command: ["/timebase-server/bin/tdbserver.sh -tb -home /master-tb-home"]
    volumes:
        - "./...inst.properties:ro" # path to file
        - "./...admin.properties:ro" # path to file
        - "./...test.jks:ro" # path to file
        - "./...uac-access-rules.xml:ro" # path to file
        - "./...uac-file-security.xml:ro" # path to file
        - "./...timebase-home" # path to directory


  timebase-slave: # TimeBase Slave settings
    image: "./timebase/server:5.4.21" # TimeBase Docker image you are going to use
    container_name: timebaseSlave
    environment:
      - JAVA_OPTS=
        -Xms8g
        -Xmx8g
        -DTimeBase.version=5.0
    ports:
      - 8211:8011 # TimeBase Slave server port
#     - 8122:8022
    deploy:
      resources:
        limits:
          memory: 10G
        reservations:
          cpus: '1'
          memory: 4G
    stop_grace_period: 5m
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
    entrypoint: "/bin/sh -c"
    command: ["/timebase-server/bin/tdbserver.sh -tb -home /slave-tb-home"]
    volumes:
        - "./...inst.properties:ro" # path to file
        - "./...admin.properties:ro" # path to file
        - "./...test.jks:ro" # path to file
        - "./...uac-access-rules.xml:ro" # path to file
        - "./...uac-file-security.xml:ro" # path to file
        - "./...timebase-home" # path to directory

  replication: # Replication settings
    image: "./.../server:5.4.21" # path to the TimeBase Docker image you are going to use
    tty: true # must be set to true
    container_name: replication
    environment:
      - JAVA_OPTS=
        -Xms8g
        -Xmx8g
        -DTimeBase.version=5.0
    depends_on:
      - timebase-master
      - timebase-slave
    ports:
      - 8311:8011
#     - 8122:8022
    deploy:
      resources:
        limits:
          memory: 10G
        reservations:
          cpus: '1'
          memory: 1G
    stop_grace_period: 5m
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
    entrypoint: "/bin/sh -c"
    command: ["/replication/start-replication.sh"]
    volumes:
        - "./.../replication/start-replication.sh" # script that starts TimeBase Shell
        - "./.../replication/replication.script" # script that starts data replication

TimeBase Master Configurations

1. Custom admin.properties parameters:

# TimeBase Master host name
TimeBase.host=timebase-master

2. Custom inst.properties parameters:

# license serial number
serial=XXXX

TimeBase Slave Configurations

1. Custom admin.properties parameters:

# TimeBase Slave host name
TimeBase.host=timebase-slave

2. Custom inst.properties parameters:

# license serial number
serial=XXXX

Replication Configurations

1. Create and mount the start-replication.sh script that starts TimeBase Shell. See the Guide for reference.

Example

#!/bin/sh

/timebase-server/bin/tickdb.sh -exec exec /replication/replication.script

2. Create and mount the replication.script file that runs live replication via TimeBase Shell. See the Guide for reference and the docker-compose.yaml example earlier in this section.

Example

In the commands below, username and password come from the UAC settings, and <stream_name> is the stream to replicate:

set db dxtick://username:password@timebase-slave:8011
open
set srcdb dxtick://username:password@timebase-master:8011
set cpmode live
set reload prohibit
set srcstream <stream_name>
replicate <stream_name>

User Access Control


Enable SSL

To authenticate a client on a server and to encrypt the transmitted data, SSL uses the concept of private and public keys. Private keys are stored on the server and must not be shared with anybody. Public keys may be distributed without limitations to all clients that connect to the TimeBase Server. Keystores are used to store and access private and/or public keys. Tomcat supports the JKS, PKCS11 and PKCS12 keystore formats. JKS (Java KeyStore) is Java's standard keystore format, which can be created and edited with the Java keytool. PKCS11 and PKCS12 are Internet-standard keystore formats, which can be edited with OpenSSL and Microsoft Key-Manager.

The most common ways of using keystores in TimeBase:

  • Both the Tomcat server and clients use the same file localhost.jks with a private key and a public key (self-signed certificate). You can find localhost.jks on subversion in the cert folder.
  • The Tomcat server and clients use their own keystores with the same public key (self-signed certificate).
  • The Tomcat server uses a certificate authority (CA) certificate authorized by a certification center. Clients use the default Java cacerts keystore with the most common CA certificates. This is the preferred option.

Setup SSL for TimeBase Server

  1. Create/export keystore file. Supported formats: JKS, PKCS11 and PKCS12.
      "%JAVA_HOME%\bin\keytool.exe" -genkey -selfcert -alias deltix -keystore "keystore.jks" -storepass deltix
    
  2. Change admin.properties file as defined below:
      TimeBase.enableSSL=true
      TimeBase.sslInfo=QSKeystoreInfo
      TimeBase.sslInfo.keystoreFile=<path to keystore file>
      TimeBase.sslInfo.keystorePass=<keystore file password>
      TimeBase.sslInfo.sslForLoopback=false
      TimeBase.sslInfo.sslPort=8022
    

Setup SSL for Client TimeBase Connections

These system properties must be defined for the client process if a self-signed certificate is used:

  1. deltix.ssl.clientKeyStore=<path to keystore file>
  2. deltix.ssl.clientKeyStorePass=<keystore file password>

Creating Keystores

  1. To use SSL for testing or for internal usage, get cert/localhost.jks from subversion.
  2. To use a self-signed certificate, follow these steps:
    2.1. Generate server keystore server.jks with private key:
    $ keytool -genkey -alias server_keystore -keystore server.jks
    Output:
    Enter keystore password: foobar
    What is your first and last name?
    [Unknown]: localhost (or your domain name)
    What is the name of your organizational unit?
    [Unknown]:
    What is the name of your organization?
    [Unknown]:
    What is the name of your City or Locality?
    [Unknown]:
    What is the name of your State or Province?
    [Unknown]:
    What is the two-letter country code for this unit?
    [Unknown]: US
    Is CN=localhost, OU=, O=, L=, ST=, C=US correct?
    [no]: yes
    Enter key password for <server_keystore>
    (RETURN if same as keystore password): foobar
    Important: in What is your first and last name? specify the domain name for which you want to create the certificate (if your server is loopback, just specify localhost)
    2.2. Generate server.jks public key server.cert:
    $ keytool -export -alias server_keystore -file server.cert -keystore server.jks
    2.3. Create client keystore client.jks:
    $ keytool -genkey -alias client_keystore -keystore client.jks
    2.4. Add server.cert to client.jks:
    $ keytool -import -alias client_keystore -file server.cert -keystore client.jks
    2.5. Now you have server.jks and client.jks with the same server.cert. You can specify server.jks in QSArchitect and client.jks in deltix.ssl.clientKeyStore.
  3. You can generate and use a certificate officially signed by a certification center.
    In this case, clients (including web browsers) do not need keystores.
    To create such a certificate:
    3.1. Repeat step 2.1
    3.2. Generate a Certificate Signing Request (CSR), which is needed to get an authorized certificate from a certification center:
    $ keytool -certreq -keyalg RSA -alias server_keystore -file server.csr -keystore server.jks
    3.3. Send the CSR (server.csr) to a certification center such as Verisign or Trust Center and get back an authorized certificate and the root certificate of the certification center
    3.4. Add root certificate to server.jks:
    $ keytool -import -alias root -keystore server.jks -trustcacerts -file <filename_of_the_root_certificate>
    3.5. Add the authorized certificate to server.jks:
    $ keytool -import -alias server_keystore -keystore server.jks -file <your_certificate_filename>
    3.6. Now you can specify server.jks in QSArchitect. Clients will use cacerts automatically to configure SSL.

SSL Termination

There are two ways to implement HTTPS connection termination for TimeBase:

  • Put a load balancer with SSL termination (also known as SSL offloading) in front of the TimeBase instance. In this scenario, DNS lookups must resolve the load balancer. The exact configuration depends on the load balancer you use.
  • If you run TimeBase on AWS, use the AWS deployment guide to configure the Application Load Balancer (ALB).

Define the TimeBase.network.VSClient.sslTermination=true system property to configure the TimeBase Server to accept HTTP connections from the load balancer.


Repair TimeBase

These are instructions for administrators and DevOps on how to repair a TimeBase database.

Repair Streams in 5.X Data Format

TimeBase streams written in the new 5.X format are located in the QuantServerHome/timebase directory.

One stream is stored per directory, where the directory name corresponds to the escaped stream key.

Real-life example:

/deltix/QuantServer/bin/tbrecover.sh -home /deltix/QuantServerHome

Deleting index files ...

/deltix/QuantServerHome/tickdb/__a_xa_95___o_utput_95___t_ags_95___r_aw_95_2_95_3_95_4/data/index.dat deleted.
/deltix/QuantServerHome/tickdb/__a_xa_95___o_utput_95___e_xec__[…]m_arket_95___r_aw_95_2_95_3_95_4/data/index.dat deleted.
/deltix/QuantServerHome/tickdb/__a_xa_95___o_utput_95___vwap__95___r_aw_95_2_95_3_95_4/data/index.dat deleted.
/deltix/QuantServerHome/tickdb/__a_xa_95___o_utput_95___aep__95___r_aw_95_2_95_3_95_4/data/index.dat deleted.
/deltix/QuantServerHome/tickdb/__a_xa_95___o_utput_95___qwap__95___r_aw_95_2_95_3_95_4/data/index.dat deleted.

TimeBase will self-repair streams on startup in most cases. For harder cases there is the tbrecover utility:

/bin/tbrecover.sh -home <path to QSHome>

The utility drops all indexes and forces the TimeBase Server to rebuild them on the next start - see the example above for reference.

Repair Streams in 4.X Data Format

The following real-life example shows a repair session:

==> open /deltix/QuantServerHome/tickdb
    TimeBase '/deltix/QuantServerHome/tickdb' is opened to be checked/repaired.
==> set ri true
rebuildIndexes: true
==> set ar true
autoResolve: true
==> scan full
Full scanning...
    Verify database /deltix/QuantServerHome/tickdb
1 May 13:25:36.672 INFO [RepAppShell#Worker] Start scan folders.
    Verify the structure of durable stream calendar2020
    Verify the structure of durable stream NYMEX
    Verify the structure of durable stream version 5.0 Axa_Output_Tags_Raw_2_3_4
1 May 13:25:37.399 INFO [RepAppShell#Worker] Found 12 streams.
1 May 13:25:37.399 INFO [RepAppShell#Worker] Scan finished.

TimeBase streams written in the 4.X format are located in the QuantServerHome/tickdb directory.

One stream is stored per directory, where the directory name corresponds to the escaped stream key.

TimeBase will self-repair streams on startup in most cases. For harder cases there is a tbrepshell utility:

/deltix/QuantServer/bin/tbrepshell.sh

Use the help command at any time to learn the repair shell features:

==> help


Topic Tuning

  • Dedicated CPU core for each low-latency thread
    • Isolate CPU cores on the OS level
    • Provide these cores to the JVM using the "taskset" Linux command
    • Set CPU core affinity for the low-latency Java threads from the Java program
  • Avoid allocations on the main path, minimize GC activity
  • Set a reasonable Guaranteed Safepoint Interval (depends on the app)

Backup Stream Data


> Step 1

set stream one_hour
desc
DURABLE STREAM "one_hour" (
    CLASS "deltix.timebase.api.messages.DacMessage" 'Data Access Control Message' (
        STATIC "entitlementId" 'Entitlement ID' BINARY = NULL 
    );
    CLASS "deltix.timebase.api.messages.MarketMessage" 'Market Message' UNDER "deltix.timebase.api.messages.DacMessage" (
        "currencyCode" 'Currency Code' INTEGER SIGNED (16),
        "originalTimestamp" 'Original Timestamp' TIMESTAMP,
        "sequenceNumber" 'Sequence Number' INTEGER,
        "sourceId" 'Source Id' VARCHAR ALPHANUMERIC (10)
    )
        COMMENT 'Most financial market-related messages subclass this abstract class.';
    CLASS "deltix.wct.marketdata.historical.WCTBarMessage" UNDER "deltix.timebase.api.messages.MarketMessage" (
        "closeAsk" FLOAT DECIMAL64,
        "closeAskQuoteId" BINARY,
        "closeBid" FLOAT DECIMAL64,
        "closeBidQuoteId" BINARY,
        "closeTimestamp" TIMESTAMP,
        "exchangeCode" VARCHAR ALPHANUMERIC (10),
        "highAsk" FLOAT DECIMAL64,
        "highBid" FLOAT DECIMAL64,
        "highMid" FLOAT DECIMAL64,
        "isShadow" BOOLEAN NOT NULL,
        "lowAsk" FLOAT DECIMAL64,
        "lowBid" FLOAT DECIMAL64,
        "lowMid" FLOAT DECIMAL64,
        "openAsk" FLOAT DECIMAL64,
        "openBid" FLOAT DECIMAL64,
        "openTimestamp" TIMESTAMP,
        "volume" FLOAT DECIMAL64
    );
)

> Step 2

export /var/lib/market-data-node/one_hour.qsmsg

This document explains how to use the TimeBase Shell tool to back up and restore a stream from the command line.

TimeBase Shell

  • Run the command /timebase-server/bin/tickdb.sh
  • To connect to TimeBase, execute the following shell commands:

set db dxtick://localhost:8011
open

Step 1: Record the stream format (also used to re-create the stream) - see the code sample above.

Step 2: Copy the stream data into a local file - see the code sample above.


Restore Stream Data


> Example

create DURABLE STREAM "one_hour_restored" (
    CLASS "deltix.timebase.api.messages.DacMessage" 'Data Access Control Message' (
        STATIC "entitlementId" 'Entitlement ID' BINARY = NULL
    ) NOT INSTANTIABLE;
    CLASS "deltix.timebase.api.messages.MarketMessage" 'Market Message' UNDER "deltix.timebase.api.messages.DacMessage" (
        "currencyCode" 'Currency Code' INTEGER SIGNED (16),
        "originalTimestamp" 'Original Timestamp' TIMESTAMP,
        "sequenceNumber" 'Sequence Number' INTEGER,
        "sourceId" 'Source Id' VARCHAR ALPHANUMERIC (10)
    ) NOT INSTANTIABLE
        COMMENT 'Most financial market-related messages subclass this abstract class.';
    CLASS "deltix.wct.marketdata.historical.WCTBarMessage" UNDER "deltix.timebase.api.messages.MarketMessage" (
        "closeAsk" FLOAT DECIMAL64,
        "closeAskQuoteId" BINARY,
        "closeBid" FLOAT DECIMAL64,
        "closeBidQuoteId" BINARY,
        "closeTimestamp" TIMESTAMP,
        "exchangeCode" VARCHAR ALPHANUMERIC (10),
        "highAsk" FLOAT DECIMAL64,
        "highBid" FLOAT DECIMAL64,
        "highMid" FLOAT DECIMAL64,
        "isShadow" BOOLEAN NOT NULL,
        "lowAsk" FLOAT DECIMAL64,
        "lowBid" FLOAT DECIMAL64,
        "lowMid" FLOAT DECIMAL64,
        "openAsk" FLOAT DECIMAL64,
        "openBid" FLOAT DECIMAL64,
        "openTimestamp" TIMESTAMP,
        "volume" FLOAT DECIMAL64
    );
)
OPTIONS (LOCATION = '/data'; FIXEDTYPE; PERIODICITY = 'IRREGULAR'; HIGHAVAILABILITY = FALSE)
/

Step 1: Re-Create Stream

TimeBase Shell uses the following syntax.

Simply add create at the beginning and a newline/slash at the end:

create <QQL>
/

Step 2: Load Data From File

Run the following TimeBase Shell commands:

set stream one_hour_restored
set src /var/lib/market-data-node/one_hour.qsmsg
import

Data Sizing

This technical note provides statistics about TimeBase disk space utilization.

RECOMMENDATIONS

  • Use the 5.X storage format (compression results in 8-10x space savings compared to 4.X).
  • If you must use the 4.X format: consider using the MAX distribution factor for streams containing a small number of contracts (e.g. fewer than 500). For 8x space savings, use OS-level compression to store large volumes of TimeBase data. TimeBase allows placing individual streams on dedicated volumes.
  • Make sure the TimeBase stream schema defines all unused fields as static (rather than filling each with NULL in every message).
  • In some cases, you may want to use the classic Level 1/Level 2 market data format instead of the newer Universal Market Data Format.
Market Data Type                      5.X Storage Format            4.X Storage Format
Level 1 (BBO + Trades)                6.8 MB / 1 million messages   32 MB / 1 million messages
Level 2 (Market by Level, 10 levels)  12.7 MB / 1 million messages  90 MB / 1 million messages

5.X Format Storage

LEVEL 1 MARKET DATA

Best-Bid-Offer and Trades data sample:

  • CME (NYMEX) Crude Oil (CL) and Natural Gas (NG) FUTURE contracts.
   
Sample time range: 6 market days (4/23/2020-5/1/2020)
Number of Level 1 messages stored: 26,896,346
Size on disk: 184,147,968 bytes
6.84 MB per million L1 messages
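As a quick sanity check, the per-million figure can be derived directly from the raw sample numbers above:

```python
# Derive "MB per million messages" from the Level 1 sample above.
size_bytes = 184_147_968   # size on disk
messages = 26_896_346      # number of Level 1 messages stored

mb_per_million = (size_bytes / 1_000_000) / (messages / 1_000_000)
print(round(mb_per_million, 2))  # ~6.85, in line with the 6.84 quoted above
```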

LEVEL 2 MARKET DATA

Market-by-Level and Trades data sample:

  • CME (NYMEX) Crude Oil (CL) and Natural Gas (NG) FUTURE contracts.
  • Market depth: 10 levels
   
Sample time range: 30 market days (April 2020)
Number of Level 2 messages stored: 459 million
Size on disk: 5.57 GB
12.75 MB per million L2 messages, or 13 bytes per message

4.X Format Storage

LEVEL 1 MARKET DATA

Best-Bid-Offer and Trades data sample:

  • CME (NYMEX) Crude Oil (CL) and Natural Gas (NG) FUTURE contracts.
   
Sample time range: 6 market days (4/23/2020-5/1/2020)
Number of Level 1 messages stored: 25,873,909
Size on disk: 807 MB
32 MB per million L1 messages

LEVEL 2 MARKET DATA

Market-by-Level and Trades data sample:

  • CME (NYMEX) Crude Oil (CL) and Natural Gas (NG) FUTURE contracts.
  • Market depth: 10 levels
   
Sample time range: 6 market days (4/23/2020-5/1/2020)
Number of Level 2 messages stored: 43,617,735
Size on disk: 4,046,630,912 bytes/541,920,890 bytes (gzip size)
90 MB per million L2 messages, or 93 bytes per message

4.3 Classic Message Format

LEVEL 2 MARKET DATA

  • FX market from 10 LPs for 27 currency pairs
  • Market depth: 10 levels
   
Sample time range: 1 market day (10/21/2014)
Number of Level 2 messages stored: 834,503,715
Size on disk: 85,523,931,136 bytes
100 MB per million L2 messages, or 102 bytes per message

Annexes

LEVEL2 STREAM SAMPLES

Snapshot message sample:

Incremental update message:


Data Partitioning

TimeBase provides the ability to distribute data in a single stream across different locations, so-called spaces.

TimeBase Spaces are like partitions or shards in other systems.

Each stream space is a separate disk location, which can store data for an independent set of symbols (instruments). At the same time, the stream holds consolidated meta information from all spaces.

For example, when two independent message producers write data into the same TimeBase stream, it is convenient to assign a unique set of symbols to each of them. Each producer writes data into a separate space. A consumer can read all data from both spaces or from a single space.

API


> Example 1

LoadingOptions options = new LoadingOptions();
options.space = "partition1";
TickLoader loader = db.createLoader(options);
loader.send(message);

> Example 2

SelectionOptions options = new SelectionOptions();
options.space = "partition1";
TickCursor cursor = stream.select(time, options);

> Example 3

deltix.qsrv.hf.tickdb.pub.TickStream.listSpaces()
deltix.qsrv.hf.tickdb.pub.TickStream.listEntities(java.lang.String)
deltix.qsrv.hf.tickdb.pub.TickStream.getTimeRange(java.lang.String)

Spaces use textual identifiers. To create a space, simply start writing into it.

By default, all data is written into NULL space.

  • Define a String value for the deltix.qsrv.hf.tickdb.pub.LoadingOptions.space attribute.
  • Create a deltix.qsrv.hf.tickdb.pub.TickLoader using these options - see code sample 1 above.

To query data contained in a specific space, use deltix.qsrv.hf.tickdb.pub.SelectionOptions.space - see code sample 2 above.

To query stream meta-information about spaces, use the provided methods - see code sample 3 above.

Use Cases

  • Classic use case – several producers write data in parallel using unique sets of symbols to increase write performance.
  • Parallel data processing – several producers write data into a stream using independent time ranges.
  • Partition data across different storage devices.

Restrictions

  • The number of spaces should be limited to 10-15 for optimal performance when querying the whole data stream.
  • Consider giving more heap memory to the TimeBase Server when spaces are defined, because each consumer reading the whole stream requires an additional 8-10 MB of memory per space.

Copying Tick Database Streams

This document describes how to copy individual streams from one tick database to another.

As discussed in the TimeBase Structure documentation, each stream is a self-contained repository of data. This fact makes the copying process very simple. All that is required to copy a stream is to copy the stream folder from one tick database to another.

Steps:

  1. Stop both instances of the TimeBase services that you will be copying the stream(s) from and to.
  2. Copy stream(s) to the new location.
  3. Restart services.

Distributed Tick Database

This document describes the process for creating a Distributed Tick Database.

As mentioned in the TimeBase Structure/Physical Layout section of the documentation, there are several reasons for wanting to distribute a tick database:

  • Spread data among multiple storage devices (not connected into a disk array) in order to increase the amount of data that can be stored.
  • Spread data among multiple storage devices (not connected into a disk array) in order to increase the performance of data retrieval.
  • Overcome the inefficiencies of specific file systems related to a large number of files being placed in a single folder. For example, it is undesirable to place more than a few thousand files into a single NTFS folder. NTFS performance begins to fall exponentially after folder size exceeds about 4000 files. In this case, using multiple folders even on the same physical device is beneficial.

A round robin approach is used in selecting storage locations when a new stream is created.

The overall approach used in this process is:

  • Create the default tick database.
  • Create the distributed folders outside of the default configuration.
  • Create the tick database catalog file, dbcat.txt.

Streams are stored using the first folder location defined in the catalog file, then rotation proceeds through the remaining folders listed, ending with the parent tick database folder that is part of the managed home directory.

Steps:

  1. Launch QuantServer Architect and create a new QuantServerHome folder to manage. This example assumes you are working with the default QuantServerHome location which is C:\Program Files\Deltix\QuantServerHome.
  2. Configure TimeBase service and apply changes. This process will generate the default tick database folder within the QuantServerHome directory.
  3. Stop the TimeBase service if it is running. This will be required if the service was configured as AUTO.
  4. Create the distributed folders.
  5. Navigate to the tickdb folder within the QuantServerHome directory that is being managed.
  6. Create the dbcat.txt catalog file within the tickdb folder.
  7. Modify the file and add the absolute paths to the distributed folders. There are no restrictions on folder names. For example:
    D:\tickdb1
    E:\myDistributedTickdb
  8. Restart TimeBase service and launch TimeBase Administrator.

As you populate your database with new streams, you will notice the first stream will be created within your D:\tickdb1 folder, the second within your E:\myDistributedTickdb folder, and the third within your parent QuantServerHome\tickdb folder.
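The round-robin rotation described above can be sketched in a few lines (an illustrative model only, not TimeBase code; the folder names are taken from the example):

```python
from itertools import cycle

# Folder locations in dbcat.txt order, ending with the parent tickdb folder.
locations = cycle([
    r"D:\tickdb1",
    r"E:\myDistributedTickdb",
    r"C:\Program Files\Deltix\QuantServerHome\tickdb",
])

# Each newly created stream is assigned the next location in the rotation,
# so the fourth stream wraps around to D:\tickdb1 again.
for stream_number in range(1, 5):
    print(stream_number, next(locations))
```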


Migrate to 5.X Format

This instruction explains how to migrate a TimeBase database to the 5.X storage format.

Prior to release 5.4 TimeBase used 4.3 (classic) storage format.

The classic format has been battle-tested for 10 years. However, extensive testing of the new format (including over a year of production use) gives us confidence to recommend the newer storage format for all TimeBase clients.

The key advantages of the 5.X format are:

  • 5-10 times less space required to store data due to built-in SNAPPY compression.
  • More efficient support of streams with a large number of instruments.
  • Ability to edit historical data (insert/modify/delete).

We offer a special utility that converts some or all streams in an existing TimeBase database into the new format.

Getting Ready

  • Stop TimeBase and all dependent processes.
  • Back up your streams. The utility preserves a copy of the original data under the QuantServerHome/tickdb/<streamKey> directory. However, as a safety precaution, it is recommended to create an additional backup for critical production sites.
  • Make sure you have the DELTIX_HOME environment variable set to your TimeBase/QuantServer installation. This variable is already defined if you use the Docker image.

export DELTIX_HOME=/deltix/QuantServer

TBMIGRATOR Utility

> Shell CLI Migration Commands

-home <qshome>  Path to QuantServer Home. Required.

-streams <stream1,..,streamN>
                Comma-separated list of streams. Optional.
                If omitted then all streams will be migrated.

-streamsRegexp <streamsRegexp>
                Regular expression to look up streams for migration.
                Optional. If defined, the -streams arg will be ignored.

-compare        If defined, streams will be compared after migration.

-transient      If defined, transient streams will be migrated (created).

> Example of converting NYMEX and NYMEX_L1 streams:

$ /deltix/QuantServer/bin/tbmigrator.sh -home /deltix/QuantServerHome -streams NYMEX,NYMEX_L1
May 01, 2020 10:05:38 PM deltix.qsrv.hf.tickdb.tool.TBMigrator$TBStreamMigrator migrate
INFO: Migrate streams from "/deltix/QuantServerHome/tickdb" to "/deltix/QuantServerHome/timebase"
May 01, 2020 10:05:38 PM deltix.ramdisk.RAMDisk createCached
INFO: Initializing RAMDisk. Data cache size = 100MB.
1 May 22:05:39.061 INFO [main] [Axa_Output_AEP_Raw_2_3_4] open: checking data consistency...
1 May 22:05:39.105 INFO [main] [Axa_Output_ExecByMarket_Raw_2_3_4] open: checking data consistency...
1 May 22:05:39.111 INFO [main] [Axa_Output_QWAP_Raw_2_3_4] open: checking data consistency...
1 May 22:05:39.131 INFO [main] [Axa_Output_Tags_Raw_2_3_4] open: checking data consistency...
1 May 22:05:39.137 INFO [main] [Axa_Output_VWAP_Raw_2_3_4] open: checking data consistency...
May 01, 2020 10:05:39 PM deltix.ramdisk.RAMDisk createCached
INFO: Initializing RAMDisk. Data cache size = 100MB.
May 01, 2020 10:05:39 PM deltix.qsrv.hf.tickdb.tool.TBMigrator$TBStreamMigrator migrate
INFO: Start [NYMEX] stream migration...
May 01, 2020 10:06:24 PM deltix.qsrv.hf.tickdb.tool.TBMigrator$TBStreamMigrator migrate
INFO: Start [NYMEX_L1] stream migration...
1 May 22:06:24.709 INFO [main] Replication completed.
1 May 22:06:47.245 INFO [main] Replication completed.
1 May 22:06:47.359 INFO [main] Writers successfully stopped, files in queue = 0
1 May 22:06:47.360 INFO [main] Writers successfully stopped, files in queue = 0
May 01, 2020 10:06:47 PM deltix.qsrv.hf.tickdb.tool.TBMigrator$TBStreamMigrator migrate
INFO: Streams migration successfully finished.

All command-line tools provided with TimeBase support the -help argument.

You can convert all streams (the default) or limit the tool to specific streams using the -streams or -streamsRegexp arguments - see the example above.

Streams in 5.X format are located in QuantServerHome/timebase.

Let's compare the size of the stream directory in the 4.X format (/tickdb) and the new 5.X format (/timebase):

[centos@ip-10-0-0-54 QuantServerHome]$ du tickdb/__nymex_ -sh
3.9G    tickdb/__nymex_
[centos@ip-10-0-0-54 QuantServerHome]$ du timebase/__nymex_ -sh
586M    timebase/__nymex_

The stream converted to 5.X takes 6.65 times less space!


Receive a Copy of Last Known Values

TimeBase provides the ability to receive a copy of the last known values for new stream subscribers.

Streams that use this feature are called unique streams.

Use Case Example

Consider a case when somebody publishes portfolio positions into TimeBase.

Every time a position in some instrument changes, we send a new message into TimeBase.

For rarely traded instruments there may be days or months between changes.

If we want to know the positions in all instruments, it may be very demanding to process the entire stream backwards until we see the last message for each instrument.

Unique streams solve this problem by keeping a cache of the last position for each instrument on the TimeBase server.

Any new subscriber will receive a snapshot of all known positions immediately before any live messages.

Message Identity

By default, messages are identified and cached using a Symbol and Instrument Type.

However, in some cases unique streams have extended identity.

You may extend the default message identity with one or more additional custom fields.

Example

To keep per-account positions in the example above, the message identity may include a custom field Account.

  • Additional fields should be annotated with @PrimaryKey (from Deltix.timebase.api package).
  • For Luminary stream schemas use [PrimaryKey] decorator (from Deltix.Timebase.API namespace).
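The last-value cache described above can be modeled in a few lines (an illustrative sketch only, not the server implementation; the account field stands in for the hypothetical custom Account identity field from the example):

```python
# Sketch of a unique stream's last-known-values cache.
# Identity = (symbol, instrumentType) plus any @PrimaryKey-annotated fields.
cache = {}

def publish(message, extra_keys=("account",)):
    key = (message["symbol"], message["instrumentType"],
           *(message[k] for k in extra_keys))
    cache[key] = message  # only the latest message per identity is kept

def snapshot():
    # What a new subscriber receives before any live messages.
    return list(cache.values())

publish({"symbol": "AAPL", "instrumentType": "EQUITY",
         "account": "ACC1", "position": 100})
publish({"symbol": "AAPL", "instrumentType": "EQUITY",
         "account": "ACC1", "position": 150})  # same identity: overwrites
print(len(snapshot()))  # 1
```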

Producer API

No special actions or API changes are required to produce data into a unique stream.

Consumer API

No special action is required from a consumer to subscribe to a unique stream.

Just be aware that the initial set of messages you receive may not be live data but rather a snapshot of the last messages.

You may inspect timestamps to distinguish live data and historical snapshot.

A more advanced technique may use Real Time notification marker.


AMQP (Enterprise Edition)

AMQP makes it easy to integrate third-party applications with TimeBase.

Many programming languages have libraries that support AMQP: JavaScript, Java, .NET, Ruby, Python, PHP, Objective-C, C/C++, Erlang, and others.

TimeBase AMQP support relies on Proton-J engine and supports AMQP specification version 1.0.

Proton-J is distributed under Apache Software Foundation 2.0 License.

AMQP support is available in QuantServer starting from version 4.3.31C.

Configuration

  1. Start QuantServer Architect
  2. Click Edit and select TimeBase component on the diagram
  3. Select Advanced tab in the right side Properties panel
  4. Enable AMQP checkbox and specify a unique port
  5. Click Apply, Start, and then Done

Operation

AMQP clients can be divided into two types:

  • Producers - write data into the TimeBase stream
  • Consumers - read data from the TimeBase stream

In both cases TimeBase stream acts as message destination/source.

Producers

Message producers that write into TimeBase specify the stream name as the destination.

Consumers

Message consumers that read from TimeBase use the following syntax to specify message source:

streamName?param1=value1&param2=value2

The syntax is similar to an HTTP query string.

The stream name is specified first, followed by optional parameters:

  • live – this parameter controls Live vs Historical data streaming mode. In live mode, data never ends (the consumer waits for new data to appear). In historical mode, the consumer completes once all currently available data has been processed (see the End-of-Data section below).
    Example: live=true
    The consumer operates in historical mode by default.
  • type – this parameter controls which types of messages a cursor reads.
    If not specified, the cursor reads all message types.
    Example:
    type=deltix.timebase.api.messages.TradeMessage&
    type=deltix.timebase.api.messages.BestBidOfferMessage
  • time – this parameter sets the start time (in the GMT time zone). Time format: YYYY-MM-DD hh:mm:ss.S, where:
    • YYYY – four-digit year
    • MM – two-digit month (01=January, etc.)
    • DD – two-digit day of month (01 through 31)
    • hh – two digits of hour (00 through 23) (am/pm NOT allowed)
    • mm – two digits of minute (00 through 59)
    • ss – two digits of second (00 through 59)
    • S – milliseconds (0..999)
      Example: time=2001-12-29 12:34:56.000
  • symbol – this parameter indicates the symbol of received messages.
    If not specified, the cursor reads messages for all symbols.
    Example: symbol=AAPL&symbol=DLTX
  • instrumentType – this parameter indicates the instrumentType of received messages.
    instrumentType can be one of EQUITY, BOND, FUTURE, OPTION, FX, INDEX, ETF, CUSTOM, etc.
    The current version supports only queries that retrieve instruments of the same type. If this parameter is omitted, the default value is EQUITY.
    Example: instrumentType=BOND
  • format – this parameter indicates the format of TimeBase messages.
    Possible formats:
    • map
    • json (default) – the current version of the AMQP bridge does not support JSON for producers.
      Example: format=map
  • heartbeat – this parameter specifies the heartbeat interval (in seconds) for live data consumers. The default value is 0 (no heartbeats).
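The parameters above can be combined in one source address string. A minimal sketch of a helper that assembles such an address (the function name is illustrative, not part of the TimeBase API; note that repeatable parameters such as type and symbol may appear more than once):

```javascript
// Illustrative helper: build a consumer source address of the form
// streamName?param1=value1&param2=value2 from a stream name and a
// parameter object; array values are expanded into repeated parameters.
function buildSourceAddress(streamName, params) {
  var parts = [];
  Object.keys(params || {}).forEach(function (key) {
    var values = [].concat(params[key]);
    values.forEach(function (v) { parts.push(key + '=' + v); });
  });
  return parts.length ? streamName + '?' + parts.join('&') : streamName;
}

// e.g. buildSourceAddress('ticks', { live: true, symbol: ['AAPL', 'DLTX'] })
// → 'ticks?live=true&symbol=AAPL&symbol=DLTX'
```

The sketch performs no URL encoding, matching the documented examples (the time parameter, for instance, contains a literal space).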

Message Format


> Example Node.js:

var message = {
	"type" : "deltix.timebase.api.messages.L2Message",
	"timestamp" : "2016-08-18 17:16:58.202",
	"symbol":"JSONP",
	"instrumentType":"EQUITY",
	"exchangeCode":"NY4",
	"currencyCode":1,
	"sequenceNumber":1,
	"isImplied":true,
	"isSnapshot":true,
	"sequenceId":1,
	"actions":[
		{	
			"type" : "deltix.timebase.api.messages.Level2Action",
			"level":1,
			"isAsk":false,
			"action":"UPDATE",
			"price":9.5,
			"size":19.5,
			"numOfOrders":1
		},
		{
			"type" : "deltix.timebase.api.messages.Level2Action",
			"level":1,
			"isAsk":true,
			"action":"DELETE",
			"price":21.5,
			"size":11.5,
			"numOfOrders":1
		}
	]
};
sender.send(message);

> Best Bid Offer Message in JSON format:

{
	"symbol":"AAPL",
	"instrumentType":"BOND",
	"timestamp":"2016-07-21 18:19:13.112",
	"sequenceNumber":null,
	"bidPrice":19.0,
	"bidSize":3.0,
	"offerPrice":55.0,
	"offerSize":10.0
}

> Trade Message in JSON format:

{
	"symbol":"TestR",
	"instrumentType":"BOND",
	"timestamp":"2016-07-21 18:19:13.112",
	"sequenceNumber":null,
	"price":50.0,
	"size":3.0,
	"condition":"msg #18",
	"aggressorSide":null,
	"beginMatch":true,
	"netPriceChange":9.0,
	"eventType":null
}


> Example Java JMS:

// Populate a MapMessage with TimeBase field names (map format)
MapMessage message = session.createMapMessage();
message.setString("type", "deltix.qsrv.hf.pub.BarMessage");
message.setDouble("open", 500);
message.setString("symbol", "Test");
message.setString("instrumentType", "BOND");
producer.send(message); // producer: a MessageProducer for the stream destination

Messages in AMQP can transmit only attribute types defined in the specification.

These include:

  • null
  • boolean
  • ubyte
  • ushort
  • uint
  • ulong
  • byte
  • short
  • int
  • long
  • float
  • double
  • decimal32
  • decimal64
  • decimal128
  • char
  • timestamp
  • uuid
  • binary
  • string
  • symbol
  • list
  • map
  • array

Please note that timestamps can be transmitted either as a string in YYYY-MM-DD hh:mm:ss.SSS format in GMT time zone, or as a number (which represents Unix epoch time).
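A minimal sketch of the string form of a timestamp (the helper name is illustrative; it converts a Unix epoch value in milliseconds into the documented YYYY-MM-DD hh:mm:ss.SSS form in the GMT time zone):

```javascript
// Format an epoch-milliseconds timestamp as "YYYY-MM-DD hh:mm:ss.SSS" in GMT,
// the string form accepted alongside plain epoch numbers.
function toGmtTimestamp(epochMs) {
  function pad(n, width) { return String(n).padStart(width, '0'); }
  var d = new Date(epochMs);
  return d.getUTCFullYear() + '-' + pad(d.getUTCMonth() + 1, 2) + '-' +
         pad(d.getUTCDate(), 2) + ' ' + pad(d.getUTCHours(), 2) + ':' +
         pad(d.getUTCMinutes(), 2) + ':' + pad(d.getUTCSeconds(), 2) + '.' +
         pad(d.getUTCMilliseconds(), 3);
}
```

Sending the epoch number directly avoids the conversion; the string form is easier to read in logs and samples.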

TimeBase uses the application-data part of the AMQP message to store all message content.

Application data is essentially a map where each message field is stored as a { field-name, field-value } pair.

Optional message fields set to null are omitted from TimeBase messages.

End-of-Data and Heartbeats

When a consumer reads a stream in historical mode, a special message is sent to indicate end-of-data.

This message means that all requested messages have been consumed.

The end-of-data message is a message with application-data set to AmqpValue(null).

Similarly, when a consumer reads a stream in live mode, the same kind of message is sent as a periodic heartbeat whenever all available messages have been consumed and no new live data has arrived yet.
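Since both end-of-data and heartbeat messages carry application-data of AmqpValue(null), a consumer can detect them by checking for an empty body. A hedged sketch (the message shape and function name are assumptions; the exact property depends on your AMQP client library):

```javascript
// Hypothetical check: an end-of-data or heartbeat message has no
// application-data payload, so its decoded body is null/undefined.
function isEndOfDataOrHeartbeat(message) {
  return message.body === null || message.body === undefined;
}
```

In historical mode such a message means the consumer can stop; in live mode it is merely a keep-alive and the consumer should continue reading.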

Node.js Sample


node examples/producerBar.js
node examples/producerL2.js
node examples/consumerBar.js 
node examples/consumerLiveL2.js 
node examples/test.js

  1. Download and install Node.js
  2. Download and unzip the Deltix TimeBase AMQP examples package
  3. Run a command shell from the root of the uncompressed folder and execute: npm install
  4. Run individual examples as shown in the attached code sample