

Replication Between Two TimeBase Instances

This instruction describes how to replicate data live from one TimeBase instance to another via TimeBase Shell in Docker.

For this example we will use three components which are started one by one in the following order:

  1. TimeBase Master - source instance
  2. TimeBase Slave - target instance
  3. Replication - data replication script

Docker Compose

Run Docker Compose with the configuration for each of the components. Configuration files in each component's mounted volumes override the common configuration whenever they supply non-default values.

# Docker Compose example for TimeBase Enterprise Edition

version: '3.5'
services:
  timebase-master: # TimeBase Master settings
    image: "registry.deltixhub.com/quantserver.docker/timebase/server:5.5.55" # path to the TimeBase Docker image you are going to use
    container_name: timebaseMaster
    environment:
      - TIMEBASE_SERIAL=${SERIAL_NUMBER}
      - JAVA_OPTS=
        -Xms8g
        -Xmx8g
        -DTimeBase.version=5.0
        -XX:+HeapDumpOnOutOfMemoryError
        -XX:HeapDumpPath=/timebase-home/timebase.hprof
        -Xlog:gc=debug:file=/timebase-home/GClog-TimeBase.log:time,uptime,level,tags:filecount=5,filesize=100m
    ports:
      - 8011:8011 # TimeBase Master server port
    deploy:
      resources:
        limits:
          memory: 10G
        reservations:
          cpus: '1'
          memory: 4G
    stop_grace_period: 5m
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - "./timebase-home:/timebase-home"

  timebase-slave: # TimeBase Slave settings
    image: "registry.deltixhub.com/quantserver.docker/timebase/server:5.5.55" # path to the TimeBase Docker image you are going to use
    environment:
      - TIMEBASE_SERIAL=${SERIAL_NUMBER}
      - JAVA_OPTS=
        -Xms8g
        -Xmx8g
        -DTimeBase.version=5.0
        -XX:+HeapDumpOnOutOfMemoryError
        -XX:HeapDumpPath=/timebase-home/timebase.hprof
        -Xlog:gc=debug:file=/timebase-home/GClog-TimeBase.log:time,uptime,level,tags:filecount=5,filesize=100m
    ports:
      - 8012:8012 # TimeBase Slave server port
    deploy:
      resources:
        limits:
          memory: 10G
        reservations:
          cpus: '1'
          memory: 4G
    stop_grace_period: 5m
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - "./timebase-home:/timebase-home"

  replication: # Replication settings
    image: "registry.deltixhub.com/quantserver.docker/timebase/server:5.5.55" # path to the TimeBase Docker image you are going to use
    tty: true # must be set to true
    environment:
      - TIMEBASE_SERIAL=${SERIAL_NUMBER}
      - JAVA_OPTS=
        -Xms8g
        -Xmx8g
        -DTimeBase.version=5.0
    depends_on:
      - timebase-master
      - timebase-slave
    ports:
      - 8013:8013
    deploy:
      resources:
        limits:
          memory: 10G
        reservations:
          cpus: '1'
          memory: 1G
    stop_grace_period: 5m
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
    command: ["/replication/start-replication.sh"]
    volumes:
      - "./replication/start-replication.sh:/replication/start-replication.sh" # script that starts TimeBase Shell
      - "./replication/replication.script:/replication/replication.script" # script that starts data replication

TimeBase Master Configurations

Custom admin.properties parameters:

# TimeBase Master host name
TimeBase.host=timebase-master

TimeBase Slave Configurations

Custom admin.properties parameters:

# TimeBase Slave host name
TimeBase.host=timebase-slave

Replication Configurations

  1. Create and mount the start-replication.sh script that starts TimeBase Shell. See the Guide for reference.

    # Example
    #!/bin/sh
    /timebase-server/bin/tickdb.sh -exec exec /replication/replication.script
  2. Create and mount the replication.script file that runs live data replication via TimeBase Shell. See the Guide for reference and the docker-compose.yaml example earlier in this section.

    # Example
    set db dxtick://username:password@timebase-slave:8011 # Username and password from UAC settings
    open
    set srcdb dxtick://username:password@timebase-master:8011 # Username and password from UAC settings
    set cpmode live
    set reload prohibit
    set srcstream <stream_name>
    replicate <stream_name>
info

Refer to Replication for more information about TimeBase replication options.

User Access Control

In the admin.properties configuration file, specify the following parameters to enable UAC:

# Users' authorization information
QuantServer.security.rulesConfig=uac-access-rules.xml
# Users' authentication information
QuantServer.security.userDirectoryConfig=uac-file-security.xml
QuantServer.security=FILE
info

Refer to UAC Rules for more information.


Install TimeBase as a systemd Service

Create a systemd unit file to install TimeBase as a systemd service on Linux. QuantServer must be installed under /deltix/QuantServer.

Example of a systemd file
[Unit]
Description=Deltix Timebase Service on port 8011
After=syslog.target

[Service]
Environment=DELTIX_HOME=/deltix/QuantServer
Environment="JAVA_OPTS=-Xms8g -Xmx8g -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDateStamps -Xloggc:/mnt/data/deltix/QuantServerHome/logs/GClog-TimeBase.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/mnt/data/deltix/timebase.hprof"
User=centos
WorkingDirectory=/mnt/data/deltix/QuantServerHome
ExecStart=/deltix/QuantServer/bin/tdbserver.sh -home /mnt/data/deltix/QuantServerHome
SyslogIdentifier=timebase
SuccessExitStatus=143
TimeoutSec=600
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
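
Assuming the unit file above is saved as /etc/systemd/system/timebase.service (the unit name is an assumption), the service can be registered and started with:

```shell
sudo systemctl daemon-reload          # pick up the new unit file
sudo systemctl enable --now timebase  # start now and on every boot
systemctl status timebase             # verify the service is running
```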

Enable SSL

info

Refer to Deployments to learn how to enable SSL in Docker Compose.

SSL uses the concept of private and public keys to authenticate a client on a server and to encrypt the transmitted data. Private keys are stored on the server and must not be shared with anybody. Public keys may be freely distributed to all clients that connect to the TimeBase Server.

Keystores are used to store and access private and/or public keys. Tomcat supports the JKS, PKCS11, and PKCS12 keystore formats. JKS (Java KeyStore) is the standard Java keystore format, which can be created and edited with the Java keytool. PKCS11 and PKCS12 are Internet-standard keystore formats, which can be edited with OpenSSL and Microsoft Key-Manager.

The most common ways of using keystores in TimeBase:

  • Both the Tomcat server and clients use the same file localhost.jks with a private key and public key (self-signed certificate). You can find localhost.jks in the cert folder in subversion.

  • The Tomcat server and clients use their own keystores with the same public key (self-signed certificate).

  • The Tomcat server uses a certificate authorized by a certification authority (CA). Clients use the default Java cacerts keystore with the most common CA certificates. This is the preferred approach.

Setup SSL for TimeBase Server

  1. Create/export a keystore file. Supported formats: JKS, PKCS11, and PKCS12.

    "%JAVA_HOME%\bin\keytool.exe" -genkey -alias deltix -keystore "keystore.jks" -storepass deltix
  2. Change admin.properties file as defined below:

    TimeBase.enableSSL=true
    TimeBase.sslInfo=QSKeystoreInfo
    TimeBase.sslInfo.keystoreFile=<path to keystore file>
    TimeBase.sslInfo.keystorePass=<keystore file password>
    TimeBase.sslInfo.sslForLoopback=false
    TimeBase.sslInfo.sslPort=8022

Setup SSL for Client TimeBase Connections

The following system properties should be defined for the client process if a self-signed certificate is used:

  1. deltix.ssl.clientKeyStore=<path to keystore file>
  2. deltix.ssl.clientKeyStorePass=<keystore file password>

Creating Keystores

  1. To use SSL for testing or internal use, take cert/localhost.jks from subversion.
  2. To use a self-signed certificate, follow these steps:
    1. Generate server keystore server.jks with private key: $ keytool -genkey -alias server_keystore -keystore server.jks. Output:
      • Enter keystore password: foobar
      • What is your first and last name? - [Unknown]: localhost (or your domain name). Important: in What is your first and last name?, specify the domain name for which you want to create the certificate (if your server is loopback, just specify localhost).
      • What is the name of your organizational unit? - [Unknown]:
      • What is the name of your organization? - [Unknown]:
      • What is the name of your City or Locality? - [Unknown]:
      • What is the name of your State or Province? - [Unknown]:
      • What is the two-letter country code for this unit? - [Unknown]: US
      • Is CN=localhost, OU=, O=, L=, ST=, C=US correct? - [no]: yes
      • Enter key password for <server_keystore> - (RETURN if same as keystore password): foobar.
    2. Generate server.jks public key server.cert: $ keytool -export -alias server_keystore -file server.cert -keystore server.jks.
    3. Create client keystore client.jks: $ keytool -genkey -alias client_keystore -keystore client.jks.
    4. Add server.cert to client.jks: $ keytool -import -alias client_keystore -file server.cert -keystore client.jks.
    5. You have server.jks and client.jks with the same server.cert. Now you can specify server.jks in QSArchitect and client.jks in deltix.ssl.clientKeyStore.
  3. You can generate and use a certificate officially signed by a certification center. In this case, clients (including web browsers) do not need keystores. To create such a certificate:
    1. Repeat step 2.1
    2. Generate Certificate Signing Request (CSR), which needs to get authorized certificate from certification center: $ keytool -certreq -keyalg RSA -alias server_keystore -file server.csr -keystore server.jks.
    3. Send the CSR (server.csr) to a certification center, such as Verisign or Trust Center, and get back an authorized certificate and the root certificate of the certification center.
    4. Add root certificate to server.jks: $ keytool -import -alias root -keystore server.jks -trustcacerts -file <filename_of_the_root_certificate>.
    5. Add the authorized certificate to server.jks: $ keytool -import -alias server_keystore -keystore server.jks -file <your_certificate_filename>.
    6. Now you can specify server.jks in QSArchitect. Clients will use the default cacerts keystore automatically to configure SSL.

SSL Termination

There are two ways of implementing HTTPS connection termination for TimeBase:

  • Put a load balancer with SSL termination (also known as SSL offloading) in front of the TimeBase instance. In this scenario, DNS lookups must resolve the load balancer. The exact configuration depends on the load balancer you use.
  • If you run TimeBase on AWS, use AWS deployment guide to configure the Application Load Balancer (ALB).

Define the TimeBase.network.VSClient.sslTermination=true TimeBase system property to configure the TimeBase Server to process plain HTTP connections from the load balancer.
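
Since this is a system property, one way to supply it is as a -D JVM option in JAVA_OPTS (a sketch; the surrounding heap options are illustrative):

```
JAVA_OPTS=-Xms8g -Xmx8g -DTimeBase.network.VSClient.sslTermination=true
```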


Repair TimeBase

This instruction gives TimeBase repair guidelines for administrators and DevOps engineers.

Repair Streams in 5.X Data Format

TimeBase streams written in the new 5.X format are located in the QuantServerHome/timebase directory, one stream per directory; the directory name corresponds to the escaped stream key.

# Example:

/deltix/QuantServer/bin/tbrecover.sh -home /deltix/QuantServerHome

Deleting index files ...

/deltix/QuantServerHome/tickdb/__a_xa_95___o_utput_95___t_ags_95___r_aw_95_2_95_3_95_4/data/index.dat deleted.
/deltix/QuantServerHome/tickdb/__a_xa_95___o_utput_95___e_xec__[]m_arket_95___r_aw_95_2_95_3_95_4/data/index.dat deleted.
/deltix/QuantServerHome/tickdb/__a_xa_95___o_utput_95___vwap__95___r_aw_95_2_95_3_95_4/data/index.dat deleted.
/deltix/QuantServerHome/tickdb/__a_xa_95___o_utput_95___aep__95___r_aw_95_2_95_3_95_4/data/index.dat deleted.
/deltix/QuantServerHome/tickdb/__a_xa_95___o_utput_95___qwap__95___r_aw_95_2_95_3_95_4/data/index.dat deleted.

TimeBase will self-repair streams on startup in most cases. For harder cases, there is the tbrecover utility:

/bin/tbrecover.sh -home <path to QSHome>

This utility drops all indexes and forces the TimeBase Server to rebuild them on the next start - refer to the example above.

Repair Streams in 4.X Data Format

# Repair session example:

==> open /deltix/QuantServerHome/tickdb
TimeBase '/deltix/QuantServerHome/tickdb' is opened to be checked/repaired.
==> set ri true
rebuildIndexes: true
==> set ar true
autoResolve: true
==> scan full
Full scanning...
Verify database /deltix/QuantServerHome/tickdb
1 May 13:25:36.672 INFO [RepAppShell#Worker] Start scan folders.
Verify the structure of durable stream calendar2020
Verify the structure of durable stream NYMEX
Verify the structure of durable stream version 5.0 Axa_Output_Tags_Raw_2_3_4
1 May 13:25:37.399 INFO [RepAppShell#Worker] Found 12 streams.
1 May 13:25:37.399 INFO [RepAppShell#Worker] Scan finished.

TimeBase streams written in the 4.X format are located in the QuantServerHome/tickdb directory, one stream per directory; the directory name corresponds to the escaped stream key. TimeBase will self-repair streams on startup in most cases. For harder cases, there is the tbrepshell utility:

/deltix/QuantServer/bin/tbrepshell.sh

Run help command to view Repair Shop features:

==> help

tip

You may run scan full twice to ensure there are no remaining errors.

info

Refer to Repair Shop for additional information.


Topic Tuning

  • Dedicate a CPU core to each low-latency thread
    • Isolate CPU cores at the OS level
    • Provide these cores to the JVM using the taskset Linux command
    • Set CPU core affinity for the low-latency Java threads from the Java program
  • Avoid allocations on the main path; minimize GC activity
  • Set a reasonable Guaranteed Safepoint Interval (depends on the application)
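
The core-isolation steps above can be sketched for a Linux host as follows (core IDs 2,3 and the application jar are hypothetical; the last command is runnable and prints the CPU list a pinned process is allowed to use):

```shell
# Reserve cores for low-latency threads at boot via kernel parameters
# (hypothetical core IDs):
#   isolcpus=2,3 nohz_full=2,3
# Start the JVM pinned to the isolated cores:
#   taskset -c 2,3 java -Xms8g -Xmx8g -jar low-latency-app.jar
# Verify pinning from inside a process (this demo pins to core 0 and
# prints the allowed CPU list):
taskset -c 0 grep Cpus_allowed_list /proc/self/status
```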

Backup Stream Data


# Step 1

set stream one_hour
desc
DURABLE STREAM "one_hour" (
CLASS "deltix.timebase.api.messages.DacMessage" 'Data Access Control Message' (
STATIC "entitlementId" 'Entitlement ID' BINARY = NULL
);
CLASS "deltix.timebase.api.messages.MarketMessage" 'Market Message' UNDER "deltix.timebase.api.messages.DacMessage" (
"currencyCode" 'Currency Code' INTEGER SIGNED (16),
"originalTimestamp" 'Original Timestamp' TIMESTAMP,
"sequenceNumber" 'Sequence Number' INTEGER,
"sourceId" 'Source Id' VARCHAR ALPHANUMERIC (10)
)
COMMENT 'Most financial market-related messages subclass this abstract class.';
CLASS "deltix.wct.marketdata.historical.WCTBarMessage" UNDER "deltix.timebase.api.messages.MarketMessage" (
"closeAsk" FLOAT DECIMAL64,
"closeAskQuoteId" BINARY,
"closeBid" FLOAT DECIMAL64,
"closeBidQuoteId" BINARY,
"closeTimestamp" TIMESTAMP,
"exchangeCode" VARCHAR ALPHANUMERIC (10),
"highAsk" FLOAT DECIMAL64,
"highBid" FLOAT DECIMAL64,
"highMid" FLOAT DECIMAL64,
"isShadow" BOOLEAN NOT NULL,
"lowAsk" FLOAT DECIMAL64,
"lowBid" FLOAT DECIMAL64,
"lowMid" FLOAT DECIMAL64,
"openAsk" FLOAT DECIMAL64,
"openBid" FLOAT DECIMAL64,
"openTimestamp" TIMESTAMP,
"volume" FLOAT DECIMAL64
);
)

# Step 2

export /var/lib/market-data-node/one_hour.qsmsg

This section explains how to back up and restore streams using the TimeBase Shell command line interface.

  1. Run the command /timebase-server/bin/tickdb.sh

  2. Execute the following commands in TimeBase Shell to connect to TimeBase:

    set db dxtick://localhost:8011
    open
  3. Record the stream schema (also used to re-create the stream) - see Step 1 in the code sample

  4. Copy the stream data into a local file - see Step 2 in the code sample

info

Refer to TimeBase Shell CLI for more information.


Restore Stream Data

# Example

create DURABLE STREAM "one_hour_restored" (
CLASS "deltix.timebase.api.messages.DacMessage" 'Data Access Control Message' (
STATIC "entitlementId" 'Entitlement ID' BINARY = NULL
) NOT INSTANTIABLE;
CLASS "deltix.timebase.api.messages.MarketMessage" 'Market Message' UNDER "deltix.timebase.api.messages.DacMessage" (
"currencyCode" 'Currency Code' INTEGER SIGNED (16),
"originalTimestamp" 'Original Timestamp' TIMESTAMP,
"sequenceNumber" 'Sequence Number' INTEGER,
"sourceId" 'Source Id' VARCHAR ALPHANUMERIC (10)
) NOT INSTANTIABLE
COMMENT 'Most financial market-related messages subclass this abstract class.';
CLASS "deltix.wct.marketdata.historical.WCTBarMessage" UNDER "deltix.timebase.api.messages.MarketMessage" (
"closeAsk" FLOAT DECIMAL64,
"closeAskQuoteId" BINARY,
"closeBid" FLOAT DECIMAL64,
"closeBidQuoteId" BINARY,
"closeTimestamp" TIMESTAMP,
"exchangeCode" VARCHAR ALPHANUMERIC (10),
"highAsk" FLOAT DECIMAL64,
"highBid" FLOAT DECIMAL64,
"highMid" FLOAT DECIMAL64,
"isShadow" BOOLEAN NOT NULL,
"lowAsk" FLOAT DECIMAL64,
"lowBid" FLOAT DECIMAL64,
"lowMid" FLOAT DECIMAL64,
"openAsk" FLOAT DECIMAL64,
"openBid" FLOAT DECIMAL64,
"openTimestamp" TIMESTAMP,
"volume" FLOAT DECIMAL64
);
)
OPTIONS (LOCATION = '/data'; FIXEDTYPE; PERIODICITY = 'IRREGULAR'; HIGHAVAILABILITY = FALSE)
/
  1. To re-create a stream, add create at the beginning and a newline and slash at the end:

    create <QQL>
    /
    tip

    If a stream schema contains abstract classes, you need to manually patch the QQL obtained during backup to annotate them.

    The correct syntax is: CLASS ( … ) NOT INSTANTIABLE.

  2. Run the following commands in Shell to load the data from a file:

    set stream one_hour_restored
    set src /var/lib/market-data-node/one_hour.qsmsg
    import
    tip

    Refer to TimeBase Shell for more information.


Data Sizing

This technical note provides TimeBase disk space utilization statistics.

RECOMMENDATIONS

  • Use the 5.X storage format (compression results in 8-10x space savings compared to 4.X).
  • If you must use the 4.X format, consider using the MAX distribution factor for streams containing a small number of contracts (e.g., fewer than 500). For 8x space savings, use OS-level compression to store large data volumes. TimeBase allows placing individual streams in dedicated volumes.
  • Make sure the TimeBase stream schema defines all unused fields as STATIC (rather than filling them with NULL in each message).
  • In some cases, you may want to use the classic Level 1/Level 2 market data format instead of the newer Universal Market Data Format.
Market Data Type                        5.X Storage Format            4.X Storage Format
Level 1 (BBO + Trades)                  6.8 MB / 1 million messages   32 MB / 1 million messages
Level 2 (Market by Level), 10 levels    12.7 MB / 1 million messages  90 MB / 1 million messages

5.X Format Storage

LEVEL 1 MARKET DATA

Best-Bid-Offer and Trades data sample:

  • CME (NYMEX) Crude Oil (CL) and Natural Gas (NG) FUTURE contracts.
Parameter                            Value
Sample time range                    6 market days (4/23/2020 - 5/1/2020)
Number of Level 1 messages stored    26,896,346
Disk space                           184,147,968 bytes

6.84 MB per million L1 messages

LEVEL 2 MARKET DATA

Market-by-Level and Trades data sample:

  • CME (NYMEX) Crude Oil (CL) and Natural Gas (NG) FUTURE contracts.
  • Market depth: 10 levels
Parameter                            Value
Sample time range                    30 market days (April 2020)
Number of Level 2 messages stored    459 million
Disk space                           5.57 GB

12.75 MB per million L2 messages, or 13 bytes per message

4.X Format Storage

LEVEL 1 MARKET DATA

Best-Bid-Offer and Trades data sample:

  • CME (NYMEX) Crude Oil (CL) and Natural Gas (NG) FUTURE contracts.
Parameter                            Value
Sample time range                    6 market days (4/23/2020 - 5/1/2020)
Number of Level 1 messages stored    25,873,909
Disk space                           807 MB

32 MB per million L1 messages

LEVEL 2 MARKET DATA

Market-by-Level and Trades data sample:

  • CME (NYMEX) Crude Oil (CL) and Natural Gas (NG) FUTURE contracts.
  • Market depth: 10 levels
Parameter                            Value
Sample time range                    6 market days (4/23/2020 - 5/1/2020)
Number of Level 2 messages stored    43,617,735
Disk space                           4,046,630,912 bytes / 541,920,890 bytes (gzip size)

90 MB per million L2 messages, or 93 bytes per message

4.3 Classic Message Format

LEVEL 2 MARKET DATA

  • FX market from 10 LPs for 27 currency pairs
  • Market depth: 10 levels
Parameter                            Value
Sample time range                    1 market day (10/21/2014)
Number of Level 2 messages stored    834,503,715
Disk space                           85,523,931,136 bytes

100 MB per million L2 messages, or 102 bytes per message

Annexes

LEVEL2 STREAM SAMPLES

Snapshot message sample:

Incremental update message:


Data Partitioning

In TimeBase you can distribute data within a single stream across different locations, so-called spaces. TimeBase spaces are like partitions or shards in other systems. Each stream space is a separate disk location that stores data for an independent set of symbols (instruments). At the same time, the stream holds consolidated meta-information from all spaces.

For example, when two independent message producers write data into the same TimeBase stream, it is convenient to assign a unique set of symbols to each of them. Each producer then writes data into a separate space. Consumers can read all data from all spaces or from a particular space.

API


// Example 1

LoadingOptions options = new LoadingOptions();
options.space = "partition1";
TickLoader loader = db.createLoader(options);
loader.send(message);

// Example 2

SelectionOptions options = new SelectionOptions();
options.space = "partition1";
TickCursor cursor = stream.select(time, options);

// Example 3

deltix.qsrv.hf.tickdb.pub.TickStream.listSpaces()
deltix.qsrv.hf.tickdb.pub.TickStream.listEntities(java.lang.String)
deltix.qsrv.hf.tickdb.pub.TickStream.getTimeRange(java.lang.String)

Spaces use textual identifiers. Start writing into a space to create it. By default, all data is written into the NULL space.

  • Define a String value for the deltix.qsrv.hf.tickdb.pub.LoadingOptions.space attribute.
  • Create a deltix.qsrv.hf.tickdb.pub.TickLoader using these options - see code example 1.
  • Use deltix.qsrv.hf.tickdb.pub.SelectionOptions.space to select data from a specific space - see code example 2.
  • Use the methods listed in code example 3 to query stream meta-information about spaces.

Use Cases

  • Classic use case – several producers write data in parallel, each using a unique set of symbols, to increase writing performance.
  • Parallel data processing – several producers write data into a stream using independent time ranges.
  • Use data partitions to control where stream data is stored.

Restrictions

  • Limit the number of spaces to 10-15 to get optimal performance when querying the whole data stream.
  • When working with spaces, consider giving more heap memory to the TimeBase Server. Each consumer reading the entire stream requires an additional 8-10 MB of memory per space.
info

Refer to Data Distribution for additional information in relation to spaces.


Copying Tick Database Streams

To copy streams from one tick database to another, simply copy the stream folder from one tick database to the other.

Steps:

  1. Stop both instances of the TimeBase services that you will be copying stream(s) from and to.
  2. Copy stream(s) to the new location.
  3. Restart services.
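
The steps above can be sketched as follows. Paths and the stream name are hypothetical; stream directories use escaped stream keys (e.g. __nymex_ for a stream named NYMEX). The runnable part rehearses the copy in temporary directories:

```shell
# On real deployments (services stopped first; paths are hypothetical):
#   cp -r /deltix/QSHome-source/tickdb/__nymex_ /deltix/QSHome-target/tickdb/
# Self-contained rehearsal in temporary directories:
SRC=$(mktemp -d); DST=$(mktemp -d)
mkdir -p "$SRC/tickdb/__nymex_/data" "$DST/tickdb"
echo "payload" > "$SRC/tickdb/__nymex_/data/z0000.dat"
cp -r "$SRC/tickdb/__nymex_" "$DST/tickdb/"  # copy the whole stream folder
ls "$DST/tickdb"                             # prints: __nymex_
```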

Distributed Tick Database (Enterprise)

This section describes the process of creating a Distributed Tick Database.

As mentioned in the Data Architecture, there are several reasons for distributing a tick database:

  • Spread data among multiple storage devices (not connected into a disk array) to increase the amount of data that can be stored.
  • Spread data among multiple storage devices (not connected into a disk array) to increase data retrieval performance.
  • Overcome the inefficiencies of specific file systems related to a large number of files placed in a single folder. For example, it is undesirable to place more than a few thousand files into a single NTFS folder: NTFS performance degrades sharply once a folder holds more than about 4,000 files. In this case, using multiple folders even on the same physical device is beneficial.

We use a round robin approach in selecting storage locations when a new stream is created.

  • Create a default tick database.
  • Create distributed folders outside the default configuration.
  • Create the tick database catalog file, dbcat.txt.

Streams are stored using the first folder location defined in the catalog file. Subsequent streams rotate through the remaining folders and end at the parent tick database folder, which is part of the home directory.

Steps:

  1. Launch QuantServer Architect and create a new QuantServerHome folder to manage. This example assumes you are working with the default QuantServerHome location which is C:\Program Files\Deltix\QuantServerHome.

  2. Configure TimeBase service and apply changes. This process will generate the default tick database folder within the QuantServerHome directory.

  3. Stop the TimeBase service if it is running. Required if the service was configured as AUTO.

  4. Create the distributed folders.

  5. Navigate to the tickdb folder within the QuantServerHome directory that is being managed.

  6. Create the dbcat.txt catalog file within the tickdb folder.

  7. Modify the file and add absolute paths to the distributed folders. There are no restrictions on folder names.

    # example
    D:\tickdb1
    E:\myDistributedTickdb
  8. Restart TimeBase service and launch TimeBase Administrator.

As you populate your database with new streams, you will notice the first stream will be created within your D:\tickdb1 folder, the second within your E:\myDistributedTickdb folder, and the third within your parent QuantServerHome\tickdb folder.


Migrate to 5.X Format

This instruction explains how to migrate TimeBase to 5.X storage format.

TimeBase used the 4.3 (classic) storage format prior to the 5.4 release. The classic format has been battle-tested for 10 years. However, extensive testing of the new format, including more than a year of production use, gives us confidence to recommend it to all TimeBase clients.

Key 5.X format advantages:

  • 5-10 times less space is required to store data due to built-in SNAPPY compression.
  • More efficient support of streams with large number of instruments.
  • Ability to edit historical data (insert/modify/delete).

We offer a utility to convert streams to 5.X format.

Getting Ready

  1. Stop TimeBase and all dependent processes.
  2. Back up your streams. The utility preserves a copy of the original data under the QuantServerHome/tickdb/<streamKey> directory. However, as a safety precaution, it is recommended to create an additional backup for critical production sites.
  3. Make sure the DELTIX_HOME environment variable is set to your TimeBase/QuantServer installation. This variable is already defined if you use a Docker image - export DELTIX_HOME=/deltix/QuantServer

TBMIGRATOR Utility

# Shell CLI Migration Commands

-home <qshome> Path to QuantServer Home. Required.

-streams <stream1,..,streamN>
Comma-separated list of streams. Optional.
If omitted, all streams will be migrated.

-streamsRegexp <streamsRegexp>
Regular expression used to look up streams for migration.
Optional. If defined, the -streams argument is ignored.

-compare If defined, streams will be compared after migration.

-transient If defined, transient streams will be migrated (created).

# Example of converting NYMEX and NYMEX_L1 streams:

$ /deltix/QuantServer/bin/tbmigrator.sh -home /deltix/QuantServerHome -streams NYMEX,NYMEX_L1
May 01, 2020 10:05:38 PM deltix.qsrv.hf.tickdb.tool.TBMigrator$TBStreamMigrator migrate
INFO: Migrate streams from "/deltix/QuantServerHome/tickdb" to "/deltix/QuantServerHome/timebase"
May 01, 2020 10:05:38 PM deltix.ramdisk.RAMDisk createCached
INFO: Initializing RAMDisk. Data cache size = 100MB.
1 May 22:05:39.061 INFO [main] [Axa_Output_AEP_Raw_2_3_4] open: checking data consistency...
1 May 22:05:39.105 INFO [main] [Axa_Output_ExecByMarket_Raw_2_3_4] open: checking data consistency...
1 May 22:05:39.111 INFO [main] [Axa_Output_QWAP_Raw_2_3_4] open: checking data consistency...
1 May 22:05:39.131 INFO [main] [Axa_Output_Tags_Raw_2_3_4] open: checking data consistency...
1 May 22:05:39.137 INFO [main] [Axa_Output_VWAP_Raw_2_3_4] open: checking data consistency...
May 01, 2020 10:05:39 PM deltix.ramdisk.RAMDisk createCached
INFO: Initializing RAMDisk. Data cache size = 100MB.
May 01, 2020 10:05:39 PM deltix.qsrv.hf.tickdb.tool.TBMigrator$TBStreamMigrator migrate
INFO: Start [NYMEX] stream migration...
May 01, 2020 10:06:24 PM deltix.qsrv.hf.tickdb.tool.TBMigrator$TBStreamMigrator migrate
INFO: Start [NYMEX_L1] stream migration...
1 May 22:06:24.709 INFO [main] Replication completed.
1 May 22:06:47.245 INFO [main] Replication completed.
1 May 22:06:47.359 INFO [main] Writers successfully stopped, files in queue = 0
1 May 22:06:47.360 INFO [main] Writers successfully stopped, files in queue = 0
May 01, 2020 10:06:47 PM deltix.qsrv.hf.tickdb.tool.TBMigrator$TBStreamMigrator migrate
INFO: Streams migration successfully finished.

All command line tools provided with TimeBase support the -help argument.

You can convert all streams (the default) or limit the tool to specific streams using the -streams or -streamsRegexp arguments - see the example above. Streams in the 5.X format are located in QuantServerHome/timebase.

Let's compare the size of the stream directory in the old 4.X format (/tickdb) and the new 5.X format (/timebase):

[centos@ip-10-0-0-54 QuantServerHome]$ du tickdb/__nymex_ -sh
3.9G tickdb/__nymex_
[centos@ip-10-0-0-54 QuantServerHome]$ du timebase/__nymex_ -sh
586M timebase/__nymex_
tip

Converted to 5.X format, a stream takes 6.65x less space!

info

Refer to TimeBase Shell for additional information about available commands.


Receive a Copy of Last Known Values

TimeBase provides the ability for new stream subscribers to receive a copy of the last known values. Streams that use this feature are called unique streams.

Example

Consider a case when somebody publishes portfolio positions into TimeBase. Every time a position changes, we send a new message into TimeBase. For rarely traded instruments, there may be days or months between changes. If we want to know the positions in all instruments, it may be very expensive to process the entire stream backwards until we see the last message for each instrument. Unique streams solve this problem by keeping the last position for each instrument in a TimeBase server-side cache. Any new subscriber receives a snapshot of all known positions immediately, before any live messages.

Message Identity

By default, messages are identified and cached using the Symbol and Instrument Type. However, in some cases unique streams need an extended identity. You may extend the default message identity with one or more additional custom fields.

tip

For keeping per-account positions in the example above, the message identity may include a custom field Account.

  • Additional fields should be annotated with @PrimaryKey (from the deltix.timebase.api package).
  • For Luminary stream schemas use [PrimaryKey] decorator (from Deltix.Timebase.API namespace).
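
As a sketch, a per-account position message with an extended identity might look like this in Java. The PrimaryKey annotation is declared locally only to keep the snippet self-contained - in a real project, import it from the deltix.timebase.api package named above and derive the class from the appropriate TimeBase message base class; the field names are hypothetical:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Stand-in for the Deltix @PrimaryKey annotation; use the real one in practice.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface PrimaryKey {}

// Position message whose identity is Symbol + Instrument Type + account:
// the annotated field extends the default unique-stream message identity.
public class PositionMessage {
    @PrimaryKey
    public String account;  // custom identity field

    public double quantity; // payload field; not part of the identity
}
```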

Producer API

No special actions or API changes are required to produce data into a unique stream.

Consumer API

No special action is required for a consumer to subscribe to unique streams. Just be aware that the initial set of messages you receive may not be live data but rather a snapshot of the last messages. You may inspect timestamps to distinguish live data from the historical snapshot. A more advanced technique is to use the real-time notification marker.


AMQP

caution

Enterprise Edition feature.

AMQP makes it easy to integrate third-party applications with TimeBase. Many programming languages have libraries that support AMQP: JavaScript, Java, .NET, Ruby, Python, PHP, Objective-C, C/C++, Erlang, and others. TimeBase AMQP support relies on the Proton-J engine and supports AMQP specification version 1.0. Proton-J is distributed under the Apache License 2.0. AMQP support is available in QuantServer starting from version 4.3.31C.

Configuration

  1. Start QuantServer Architect.
  2. Click Edit and select TimeBase component on the diagram.
  3. Select Advanced tab in the right side Properties panel.
  4. Enable AMQP checkbox and specify a unique port.
  5. Click Apply, Start, and then Done.

Operation

AMQP clients can be divided into two types:

  • Producers - write data into the TimeBase stream.
  • Consumers - read data from the TimeBase stream.

In both cases TimeBase stream acts as message destination/source.

Producers

Message producers that write into TimeBase specify the stream name as the destination.

Consumers

Message consumers that read from TimeBase use the following syntax to specify message source:

streamName?param1=value1&param2=value2

The syntax is somewhat similar to HTTP request syntax. Stream is specified first, followed by several optional parameters:

  • live: this parameter controls the Live and Historical data streaming modes. In live mode, data never ends (the consumer waits for new data to appear). In historical mode, the consumer completes once all currently available data is processed (see the End-of-Data section below). Example: live=true. By default, the consumer operates in historical mode.

  • type: this parameter controls what types of messages will be read by a cursor. If not specified, a cursor reads all types of messages.

    # Example
    type=deltix.timebase.api.messages.TradeMessage&
    type=deltix.timebase.api.messages.BestBidOfferMessage
  • time: this parameter sets the start time (in GMT timezone). Time format: YYYY-MM-DD hh:mm:ss.S, where:

    • YYYY: four-digit year
    • MM: two-digit month (01=January, etc.)
    • DD: two-digit day of month (01 through 31)
    • hh: two digits of hour (00 through 23) (am/pm NOT allowed)
    • mm: two digits of minute (00 through 59)
    • ss: two digits of second (00 through 59)
    • S: milliseconds (0..999)
    • Example: time=2001-12-29 12:34:56.000.
  • symbol: this parameter indicates the symbol of received messages. If not specified, the cursor reads messages for all symbols. Example: symbol=AAPL&symbol=DLTX.

  • instrumentType: this parameter indicates the instrumentType of received messages. instrumentType can be one of EQUITY, BOND, FUTURE, OPTION, FX, INDEX, ETF, CUSTOM, etc. The current version supports only queries that retrieve instruments of the same type. If this parameter is omitted, the default value is EQUITY. Example: instrumentType=BOND.

  • format: this parameter indicates the format of TimeBase messages. Possible formats:

    • map
    • JSON (default) - the current version of the AMQP bridge doesn't support JSON for producers.
    • Example: format=map.
  • heartbeat: this parameter specifies heartbeat interval (in seconds) for live data consumers. Default value is 0 (no heartbeats).

tip

Parameters symbol and type can be specified multiple times.
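To illustrate the address syntax, here is a small helper that builds a consumer source string from a parameter map. This is a sketch of our own: the helper name and shape are not part of any TimeBase client library.

```javascript
// Build an AMQP consumer source address of the form:
//   streamName?param1=value1&param2=value2
// Array values (e.g. several 'symbol' or 'type' entries) are emitted as
// repeated parameters, matching the rule that those may appear multiple times.
function buildSource(streamName, params) {
    var parts = [];
    Object.keys(params).forEach(function (key) {
        var values = Array.isArray(params[key]) ? params[key] : [params[key]];
        values.forEach(function (value) {
            parts.push(key + "=" + value);
        });
    });
    return parts.length ? streamName + "?" + parts.join("&") : streamName;
}
```

For example, buildSource("ticks", { live: true, symbol: ["AAPL", "DLTX"] }) produces "ticks?live=true&symbol=AAPL&symbol=DLTX".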

Message Format


// Example Node.js:

var message = {
    "type": "deltix.timebase.api.messages.L2Message",
    "timestamp": "2016-08-18 17:16:58.202",
    "symbol": "JSONP",
    "instrumentType": "EQUITY",
    "exchangeCode": "NY4",
    "currencyCode": 1,
    "sequenceNumber": 1,
    "isImplied": true,
    "isSnapshot": true,
    "sequenceId": 1,
    "actions": [
        {
            "type": "deltix.timebase.api.messages.Level2Action",
            "level": 1,
            "isAsk": false,
            "action": "UPDATE",
            "price": 9.5,
            "size": 19.5,
            "numOfOrders": 1
        },
        {
            "type": "deltix.timebase.api.messages.Level2Action",
            "level": 1,
            "isAsk": true,
            "action": "DELETE",
            "price": 21.5,
            "size": 11.5,
            "numOfOrders": 1
        }
    ]
};
sender.send(message);

// Best Bid Offer Message in JSON format:

{
    "symbol": "AAPL",
    "instrumentType": "BOND",
    "timestamp": "2016-07-21 18:19:13.112",
    "sequenceNumber": null,
    "bidPrice": 19.0,
    "bidSize": 3.0,
    "offerPrice": 55.0,
    "offerSize": 10.0
}

// Trade Message in JSON format:

{
    "symbol": "TestR",
    "instrumentType": "BOND",
    "timestamp": "2016-07-21 18:19:13.112",
    "sequenceNumber": null,
    "price": 50.0,
    "size": 3.0,
    "condition": "msg #18",
    "aggressorSide": null,
    "beginMatch": true,
    "netPriceChange": 9.0,
    "eventType": null
}


// Example Java JMS:

MapMessage message = session.createMapMessage();
message.setString("type", "deltix.qsrv.hf.pub.BarMessage");
message.setDouble("open", 500);
message.setString("symbol", "Test");
message.setString("instrumentType", "BOND");

Messages in AMQP can transmit only attribute types defined in the specification.

These include:

  • null
  • boolean
  • ubyte
  • ushort
  • uint
  • ulong
  • byte
  • short
  • int
  • long
  • float
  • double
  • decimal32
  • decimal64
  • decimal128
  • char
  • timestamp
  • uuid
  • binary
  • string
  • symbol
  • list
  • map
  • array

Please note that timestamps can be transmitted either as a string in the YYYY-MM-DD hh:mm:ss.SSS format (GMT time zone) or as a number (which represents Unix epoch time). TimeBase uses the application-data part of the AMQP message to store all message content. Application data works like a Map, where each message field is stored as a { field-name, field-value } pair.
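The two accepted timestamp forms can be converted into each other in a few lines. A sketch, assuming the numeric form is epoch milliseconds; the helper names are our own:

```javascript
// Convert the string form "YYYY-MM-DD hh:mm:ss.SSS" (GMT) to epoch milliseconds.
function timestampToEpoch(text) {
    // Rewrite to ISO 8601 with an explicit UTC designator so Date parses it as GMT.
    return new Date(text.replace(" ", "T") + "Z").getTime();
}

// Convert epoch milliseconds back to the GMT string form.
function epochToTimestamp(ms) {
    // toISOString() yields "YYYY-MM-DDThh:mm:ss.SSSZ"; adjust the separators.
    return new Date(ms).toISOString().replace("T", " ").replace("Z", "");
}
```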

Optional message fields that are set to null are skipped in TimeBase messages.

tip

When writing messages into a polymorphic stream, include the type attribute that specifies the type of the message. Messages that contain nested data types should also include a type attribute for each polymorphic field, as shown in the Node.js code sample above.

tip

Polymorphic streams contain messages of different types (as opposed to fixed-type streams).

End-of-Data and Heartbeats

A special message is sent to indicate end-of-data when the consumer is reading a stream in historical mode; it means that all the requested messages have been consumed. The end-of-data message has its application-data set to AmqpValue(null). Similarly, when the consumer is reading a stream in live mode, the same kind of message is sent as a periodic heartbeat once all available messages are consumed.

info

Current version broadcasts such heartbeats every second.
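In client code, detecting the end-of-data or heartbeat marker usually comes down to checking for a null body. A sketch; the message shape follows typical AMQP 1.0 JavaScript clients, which decode AmqpValue(null) to a null body, and is an assumption rather than a TimeBase-defined interface:

```javascript
// End-of-data and live-mode heartbeat messages carry AmqpValue(null) as
// their application data, so the decoded body is null (or absent).
// Regular data messages carry a non-null body (a map or JSON payload).
function isEndOfData(message) {
    return message.body === null || message.body === undefined;
}
```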

Node.js Sample


node examples/producerBar.js
node examples/producerL2.js
node examples/consumerBar.js
node examples/consumerLiveL2.js
node examples/test.js

  1. Download and install Node.js.
  2. Download and unzip the Deltix TimeBase AMQP examples package.
  3. Open a command shell in the root of the unzipped folder and run: npm install.
  4. Run the individual examples as shown in the code sample above.