
How To Instructions

Deployment

Install TimeBase as a systemd Service

Create a systemd unit file to run TimeBase as a systemd service on Linux. The example below assumes that QuantServer is installed under /deltix/QuantServer.

Example of a systemd file
[Unit]
Description=Deltix Timebase Service on port 8011
After=syslog.target

[Service]
Environment=DELTIX_HOME=/deltix/QuantServer
Environment="JAVA_OPTS=-Xms8g -Xmx8g -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDateStamps -Xloggc:/mnt/data/deltix/QuantServerHome/logs/GClog-TimeBase.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/mnt/data/deltix/timebase.hprof"
User=centos
WorkingDirectory=/mnt/data/deltix/QuantServerHome
ExecStart=/deltix/QuantServer/bin/tdbserver.sh -home /mnt/data/deltix/QuantServerHome
SyslogIdentifier=timebase
SuccessExitStatus=143
TimeoutSec=600
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
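
Assuming the unit file above is saved as /etc/systemd/system/timebase.service (the file name is up to you), it can be registered and started with the standard systemd commands:

sudo systemctl daemon-reload
sudo systemctl enable timebase
sudo systemctl start timebase
sudo systemctl status timebase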

Configuration

Enable SSL

info

Refer to Deployments to learn how to enable SSL in Docker Compose.

SSL uses the concept of private and public keys to authenticate a client on a server and to encrypt the transmitted data. Private keys are stored on the server and must not be shared with anybody. Public keys may be freely distributed to all clients that connect to the TimeBase Server.

Keystores are used to store and access private and/or public keys. Tomcat supports the JKS, PKCS11, and PKCS12 keystore formats. JKS (Java KeyStore) is the standard Java keystore format and can be created and edited with the Java keytool utility. PKCS11 and PKCS12 are Internet-standard keystore formats that can be edited with OpenSSL and Microsoft Key Manager.

The most common ways of using keystores with TimeBase are:

  • Both the Tomcat server and the clients use the same localhost.jks file containing the private and public keys (self-signed certificate). You can find localhost.jks in the cert folder in Subversion.
  • The Tomcat server and the clients use their own keystores that contain the same public key (self-signed certificate).
  • The Tomcat server uses a certificate signed by an authorized certification authority (CA). Clients use the default Java cacerts keystore, which contains the most common CA certificates. Most users prefer this option.

Setup SSL for TimeBase Server

  1. Create/export a keystore file. Supported formats: JKS, PKCS11, and PKCS12.

    "%JAVA_HOME%\bin\keytool.exe" -genkey -alias deltix -keystore "keystore.jks" -storepass deltix
  2. Edit the admin.properties file as shown below:

    TimeBase.enableSSL=true
    TimeBase.sslInfo=QSKeystoreInfo
    TimeBase.sslInfo.keystoreFile=<path to keystore file>
    TimeBase.sslInfo.keystorePass=<keystore file password>
    TimeBase.sslInfo.sslForLoopback=false
    TimeBase.sslInfo.sslPort=8022
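
After the TimeBase Server is restarted with these settings, a quick way to check that the SSL port presents the expected certificate is openssl s_client (the host and port are taken from the example configuration above; this assumes OpenSSL is installed on the host):

openssl s_client -connect localhost:8022 </dev/null

The command prints the certificate chain offered by the server, so you can confirm that it matches the configured keystore.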

Setup SSL for Client TimeBase Connections

The following system properties must be defined for the client process if a self-signed certificate is used (see the command-line example below):

  1. deltix.ssl.clientKeyStore=<path to keystore file>
  2. deltix.ssl.clientKeyStorePass=<keystore file password>
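
For a Java client, these properties are typically passed as -D arguments on the command line. In the sketch below, the JAR name is a placeholder for your client application and the keystore is the client.jks file created in the next section:

java -Ddeltix.ssl.clientKeyStore=<path to client.jks> -Ddeltix.ssl.clientKeyStorePass=<keystore file password> -jar <your-client-application>.jar

Alternatively, the same properties can be set programmatically with System.setProperty() before the first TimeBase connection is opened.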

Creating Keystores

  1. To use SSL for testing or internal purposes, take cert/localhost.jks from Subversion.
  2. To use a self-signed certificate, follow these steps:
    1. Generate server keystore server.jks with private key: $ keytool -genkey -alias server_keystore -keystore server.jks. Output:
      • Enter keystore password: foobar
      • What is your first and last name? - [Unknown]: localhost (or your domain name). Important: specify the domain name for which you want to create the certificate; if the server is only accessed via loopback, specify localhost.
      • What is the name of your organizational unit? - [Unknown]:
      • What is the name of your organization? - [Unknown]:
      • What is the name of your City or Locality? - [Unknown]:
      • What is the name of your State or Province? - [Unknown]:
      • What is the two-letter country code for this unit? - [Unknown]: US
      • Is CN=localhost, OU=, O=, L=, ST=, C=US correct? - [no]: yes
      • Enter key password for <server_keystore> - (RETURN if same as keystore password): foobar.
    2. Generate server.jks public key server.cert: $ keytool -export -alias server_keystore -file server.cert -keystore server.jks.
    3. Create client keystore client.jks: $ keytool -genkey -alias client_keystore -keystore client.jks.
    4. Add server.cert to client.jks as a trusted certificate (note that the alias must differ from the existing client_keystore key entry): $ keytool -import -alias server_cert -file server.cert -keystore client.jks.
    5. You now have server.jks and client.jks that share the same server.cert. You can specify server.jks in QSArchitect and client.jks in deltix.ssl.clientKeyStore.
  3. You can also generate and use a certificate officially signed by a certification authority. In this case, clients (including web browsers) do not need their own keystores. To create such a certificate:
    1. Repeat step 2.1
    2. Generate a Certificate Signing Request (CSR), which is required to obtain an authorized certificate from a certification authority: $ keytool -certreq -keyalg RSA -alias server_keystore -file server.csr -keystore server.jks.
    3. Send the CSR (server.csr) to a certification authority such as Verisign or Trust Center to obtain an authorized certificate together with the root certificate of that authority.
    4. Add root certificate to server.jks: $ keytool -import -alias root -keystore server.jks -trustcacerts -file <filename_of_the_root_certificate>.
    5. Add the authorized certificate to server.jks: $ keytool -import -alias server_keystore -keystore server.jks -file <your_certificate_filename>.
    6. Now you can specify server.jks in QSArchitect. Clients will use the default cacerts keystore automatically to configure SSL.
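
At any point in this process, you can check what a keystore actually contains with the standard keytool listing command (the keystore file and password are the ones you chose in the steps above):

keytool -list -v -keystore server.jks -storepass foobar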

SSL Termination

There are two ways of implementing HTTPS connection termination for TimeBase:

  • Put a load balancer with SSL termination (also known as SSL offloading) in front of the TimeBase instance. In this scenario, DNS lookups must resolve to the load balancer. The exact configuration depends on the load balancer you use.
  • If you run TimeBase on AWS, use the AWS deployment guide to configure the Application Load Balancer (ALB).

Define the TimeBase.network.VSClient.sslTermination=true system property to configure the TimeBase Server to accept HTTP connections from the load balancer.


Topic Tuning

  • Dedicate a CPU core to each low-latency thread:
    • Isolate CPU cores at the OS level.
    • Assign these cores to the JVM using the taskset Linux command (see the example after this list).
    • Set CPU core affinity for the low-latency Java threads from the Java program.
  • Avoid allocations on the main path and minimize GC activity.
  • Set a reasonable Guaranteed Safepoint Interval (application-dependent).
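
As an illustration of the first item, after isolating cores at the OS level (for example, with the isolcpus kernel parameter), the TimeBase JVM can be pinned to those cores with taskset. The core numbers below are examples, and the paths are taken from the systemd example above; adapt both to your host:

taskset -c 2,3 /deltix/QuantServer/bin/tdbserver.sh -home /deltix/QuantServerHome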

Working with Streams

Repair TimeBase

These instructions give TimeBase repair guidelines for administrators and DevOps engineers.

Repair Streams in 5.X Data Format

TimeBase streams written in the 5.X format are located in the QuantServerHome/timebase directory, one stream per directory, where the directory name corresponds to the escaped stream key.

# Example:

/deltix/QuantServer/bin/tbrecover.sh -home /deltix/QuantServerHome

Deleting index files ...

/deltix/QuantServerHome/tickdb/__a_xa_95___o_utput_95___t_ags_95___r_aw_95_2_95_3_95_4/data/index.dat deleted.
/deltix/QuantServerHome/tickdb/__a_xa_95___o_utput_95___e_xec__[]m_arket_95___r_aw_95_2_95_3_95_4/data/index.dat deleted.
/deltix/QuantServerHome/tickdb/__a_xa_95___o_utput_95___vwap__95___r_aw_95_2_95_3_95_4/data/index.dat deleted.
/deltix/QuantServerHome/tickdb/__a_xa_95___o_utput_95___aep__95___r_aw_95_2_95_3_95_4/data/index.dat deleted.
/deltix/QuantServerHome/tickdb/__a_xa_95___o_utput_95___qwap__95___r_aw_95_2_95_3_95_4/data/index.dat deleted.

TimeBase will self-repair streams on startup in most cases. For harder cases, there is the tbrecover utility:

/bin/tbrecover.sh -home <path to QSHome>

This utility drops all indexes and forces the TimeBase Server to rebuild them on the next start - refer to the example above.

Repair Streams in 4.X Data Format

# Repair session example:

==> open /deltix/QuantServerHome/tickdb
TimeBase '/deltix/QuantServerHome/tickdb' is opened to be checked/repaired.
==> set ri true
rebuildIndexes: true
==> set ar true
autoResolve: true
==> scan full
Full scanning...
Verify database /deltix/QuantServerHome/tickdb
1 May 13:25:36.672 INFO [RepAppShell#Worker] Start scan folders.
Verify the structure of durable stream calendar2020
Verify the structure of durable stream NYMEX
Verify the structure of durable stream version 5.0 Axa_Output_Tags_Raw_2_3_4
1 May 13:25:37.399 INFO [RepAppShell#Worker] Found 12 streams.
1 May 13:25:37.399 INFO [RepAppShell#Worker] Scan finished.

TimeBase streams written in the 4.X format are located in the QuantServerHome/tickdb directory, one stream per directory, where the directory name corresponds to the escaped stream key. TimeBase will self-repair streams on startup in most cases. For harder cases, there is the tbrepshell utility:

/deltix/QuantServer/bin/tbrepshell.sh

Run the help command to view the Repair Shop features:

==> help

tip

You may run scan full twice to ensure there are no remaining errors.

info

Refer to Repair Shop for additional information.


Backup Stream Data


# Step 1

set stream one_hour
desc
DURABLE STREAM "one_hour" (
CLASS "deltix.timebase.api.messages.DacMessage" 'Data Access Control Message' (
STATIC "entitlementId" 'Entitlement ID' BINARY = NULL
);
CLASS "deltix.timebase.api.messages.MarketMessage" 'Market Message' UNDER "deltix.timebase.api.messages.DacMessage" (
"currencyCode" 'Currency Code' INTEGER SIGNED (16),
"originalTimestamp" 'Original Timestamp' TIMESTAMP,
"sequenceNumber" 'Sequence Number' INTEGER,
"sourceId" 'Source Id' VARCHAR ALPHANUMERIC (10)
)
COMMENT 'Most financial market-related messages subclass this abstract class.';
CLASS "deltix.wct.marketdata.historical.WCTBarMessage" UNDER "deltix.timebase.api.messages.MarketMessage" (
"closeAsk" FLOAT DECIMAL64,
"closeAskQuoteId" BINARY,
"closeBid" FLOAT DECIMAL64,
"closeBidQuoteId" BINARY,
"closeTimestamp" TIMESTAMP,
"exchangeCode" VARCHAR ALPHANUMERIC (10),
"highAsk" FLOAT DECIMAL64,
"highBid" FLOAT DECIMAL64,
"highMid" FLOAT DECIMAL64,
"isShadow" BOOLEAN NOT NULL,
"lowAsk" FLOAT DECIMAL64,
"lowBid" FLOAT DECIMAL64,
"lowMid" FLOAT DECIMAL64,
"openAsk" FLOAT DECIMAL64,
"openBid" FLOAT DECIMAL64,
"openTimestamp" TIMESTAMP,
"volume" FLOAT DECIMAL64
);
)

# Step 2

export /var/lib/market-data-node/one_hour.qsmsg

This section explains how to use the TimeBase Shell CLI to back up and restore streams from the command line.

  1. Run command /timebase-server/bin/ticksb.sh

  2. Execute the following commands in TimeBase Shell to connect to TimeBase:

    set db dxtick://localhost:8011
    open
  3. Record the stream format (also used to re-create the stream) - see Step 1 in the code sample above.

  4. Copy the stream data into a local file - see Step 2 in the code sample above.

info

Refer to TimeBase Shell CLI for more information.

Restore Stream Data

# Example

create DURABLE STREAM "one_hour_restored" (
CLASS "deltix.timebase.api.messages.DacMessage" 'Data Access Control Message' (
STATIC "entitlementId" 'Entitlement ID' BINARY = NULL
) NOT INSTANTIABLE;
CLASS "deltix.timebase.api.messages.MarketMessage" 'Market Message' UNDER "deltix.timebase.api.messages.DacMessage" (
"currencyCode" 'Currency Code' INTEGER SIGNED (16),
"originalTimestamp" 'Original Timestamp' TIMESTAMP,
"sequenceNumber" 'Sequence Number' INTEGER,
"sourceId" 'Source Id' VARCHAR ALPHANUMERIC (10)
) NOT INSTANTIABLE
COMMENT 'Most financial market-related messages subclass this abstract class.';
CLASS "deltix.wct.marketdata.historical.WCTBarMessage" UNDER "deltix.timebase.api.messages.MarketMessage" (
"closeAsk" FLOAT DECIMAL64,
"closeAskQuoteId" BINARY,
"closeBid" FLOAT DECIMAL64,
"closeBidQuoteId" BINARY,
"closeTimestamp" TIMESTAMP,
"exchangeCode" VARCHAR ALPHANUMERIC (10),
"highAsk" FLOAT DECIMAL64,
"highBid" FLOAT DECIMAL64,
"highMid" FLOAT DECIMAL64,
"isShadow" BOOLEAN NOT NULL,
"lowAsk" FLOAT DECIMAL64,
"lowBid" FLOAT DECIMAL64,
"lowMid" FLOAT DECIMAL64,
"openAsk" FLOAT DECIMAL64,
"openBid" FLOAT DECIMAL64,
"openTimestamp" TIMESTAMP,
"volume" FLOAT DECIMAL64
);
)
OPTIONS (LOCATION = '/data'; FIXEDTYPE; PERIODICITY = 'IRREGULAR'; HIGHAVAILABILITY = FALSE)
/
  1. To re-create a stream, simply add create at the beginning and a newline with a slash at the end:

    create <QQL>
    /
    tip

    If a stream schema contains abstract classes, you need to manually patch the QQL obtained during the backup to annotate them.

    The correct syntax is: CLASS ( … ) NOT INSTANTIABLE.

  2. Run the following commands in Shell to load the data from a file:

    set stream one_hour_restored
    set src /var/lib/market-data-node/one_hour.qsmsg
    import
    tip

    Refer to TimeBase Shell for more information.

Copying TimeBase Streams

To copy streams from one TimeBase database to another, simply copy the stream folder from one location to the other.

Steps:

  1. Stop both TimeBase service instances that you will be copying the stream(s) from and to.
  2. Copy the stream(s) to the new location (TimeBase files reside in the tickdb folder for the 4.3 engine or in the timebase folder for the 5.0 engine), as shown in the example after this list.
  3. Restart the services.
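
A minimal sketch of step 2 for the 5.0 engine layout, assuming both hosts use the directory structure shown throughout this document and <stream_directory> stands for the escaped stream key directory:

rsync -a /deltix/QuantServerHome/timebase/<stream_directory> user@target-host:/deltix/QuantServerHome/timebase/

For the 4.3 engine, replace timebase with tickdb in both paths.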

Distributed Tick Database (Enterprise)

This section describes the process of creating a Distributed Tick Database.

As mentioned in the Data Architecture, there are several reasons for distributing a tick database:

  • Spread data among multiple storage devices (not connected into a disk array) to increase the amount of data that can be stored.
  • Spread data among multiple storage devices (not connected into a disk array) to increase the data retrieval performance.
  • Overcome the inefficiencies of specific file systems related to a large number of files placed in a single folder. For example, it is undesirable to place more than a few thousand files into a single NTFS folder: NTFS performance degrades sharply once a folder holds more than about 4000 files. In this case, using multiple folders, even on the same physical device, is beneficial.

We use a round-robin approach to select a storage location when a new stream is created.

  • Create a default tick database.
  • Create distributed folders outside the default configuration.
  • Create the tick database catalog file, dbcat.txt.

Streams are stored starting with the first folder location defined in the catalog file. Subsequent streams then rotate through the remaining folders and end at the parent tick database folder that is part of the home directory.

Steps:

  1. Launch QuantServer Architect and create a new QuantServerHome folder to manage. This example assumes you are working with the default QuantServerHome location which is C:\Program Files\Deltix\QuantServerHome.

  2. Configure TimeBase service and apply changes. This process will generate the default tick database folder within the QuantServerHome directory.

  3. Stop the TimeBase service if it is running. Required if the service was configured as AUTO.

  4. Create the distributed folders.

  5. Navigate to the tickdb folder within the QuantServerHome directory that is being managed.

  6. Create the dbcat.txt catalog file within the tickdb folder.

  7. Modify the file and add the absolute paths to the distributed folders. There are no restrictions on folder names.

    # example
    D:\tickdb1
    E:\myDistributedTickdb
  8. Restart TimeBase service and launch TimeBase Administrator.

As you populate your database with new streams, you will notice that the first stream is created in your D:\tickdb1 folder, the second in your E:\myDistributedTickdb folder, and the third in the parent QuantServerHome\tickdb folder.


Migrate to 5.X Format

These instructions explain how to migrate TimeBase to the 5.X storage format.

TimeBase used the 4.3 (classic) storage format prior to the 5.4 release. The classic format has been battle-tested for 10 years. However, extensive testing of the new format (including more than a year of production use) gives us confidence to recommend it for all TimeBase clients.

Key 5.X format advantages:

  • 5-10 times less space is required to store data due to built-in SNAPPY compression.
  • More efficient support of streams with a large number of instruments.
  • Ability to edit historical data (insert/modify/delete).

We offer a utility to convert streams to 5.X format.

Getting Ready

  1. Stop TimeBase and all dependent processes.
  2. Back up your streams. The utility preserves a copy of the original data under the QuantServerHome/tickdb/<streamKey> directory. However, as a safety precaution, it is recommended to create an additional backup for critical production sites (see the example after this list).
  3. Make sure the DELTIX_HOME environment variable points to your TimeBase/QuantServer installation, for example export DELTIX_HOME=/deltix/QuantServer. This variable is already defined if you use a Docker image.
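
A minimal sketch of the additional backup from step 2, assuming the default layout used in this document (the archive name and destination are up to you):

tar -czf /backup/tickdb-backup.tar.gz -C /deltix/QuantServerHome tickdb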

TBMIGRATOR Utility

# TBMigrator Command-Line Arguments

-home <qshome> Path to QuantServer Home. Required.

-streams <stream1,..,streamN>
Comma-separated list of streams. Optional.
If omitted, all streams are migrated.

-streamsRegexp <streamsRegexp>
Regular expression used to look up streams for migration.
Optional. If defined, the -streams argument is ignored.

-compare If defined, streams are compared after migration.

-transient If defined, transient streams are migrated (created).

# Example of converting NYMEX and NYMEX_L1 streams:

$ /deltix/QuantServer/bin/tbmigrator.sh -home /deltix/QuantServerHome -streams NYMEX,NYMEX_L1
May 01, 2020 10:05:38 PM deltix.qsrv.hf.tickdb.tool.TBMigrator$TBStreamMigrator migrate
INFO: Migrate streams from "/deltix/QuantServerHome/tickdb" to "/deltix/QuantServerHome/timebase"
May 01, 2020 10:05:38 PM deltix.ramdisk.RAMDisk createCached
INFO: Initializing RAMDisk. Data cache size = 100MB.
1 May 22:05:39.061 INFO [main] [Axa_Output_AEP_Raw_2_3_4] open: checking data consistency...
1 May 22:05:39.105 INFO [main] [Axa_Output_ExecByMarket_Raw_2_3_4] open: checking data consistency...
1 May 22:05:39.111 INFO [main] [Axa_Output_QWAP_Raw_2_3_4] open: checking data consistency...
1 May 22:05:39.131 INFO [main] [Axa_Output_Tags_Raw_2_3_4] open: checking data consistency...
1 May 22:05:39.137 INFO [main] [Axa_Output_VWAP_Raw_2_3_4] open: checking data consistency...
May 01, 2020 10:05:39 PM deltix.ramdisk.RAMDisk createCached
INFO: Initializing RAMDisk. Data cache size = 100MB.
May 01, 2020 10:05:39 PM deltix.qsrv.hf.tickdb.tool.TBMigrator$TBStreamMigrator migrate
INFO: Start [NYMEX] stream migration...
May 01, 2020 10:06:24 PM deltix.qsrv.hf.tickdb.tool.TBMigrator$TBStreamMigrator migrate
INFO: Start [NYMEX_L1] stream migration...
1 May 22:06:24.709 INFO [main] Replication completed.
1 May 22:06:47.245 INFO [main] Replication completed.
1 May 22:06:47.359 INFO [main] Writers successfully stopped, files in queue = 0
1 May 22:06:47.360 INFO [main] Writers successfully stopped, files in queue = 0
May 01, 2020 10:06:47 PM deltix.qsrv.hf.tickdb.tool.TBMigrator$TBStreamMigrator migrate
INFO: Streams migration successfully finished.

All command-line tools provided with TimeBase support the -help argument.

You can convert all streams (the default) or limit the tool to specific streams using the -streams or -streamsRegexp arguments - see the example above. Streams in the 5.X format are located in QuantServerHome/timebase.
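
For instance, to migrate every stream whose key starts with NYMEX and verify the copied data afterwards, the arguments described above can presumably be combined as follows (the regular expression is only an illustration):

$ /deltix/QuantServer/bin/tbmigrator.sh -home /deltix/QuantServerHome -streamsRegexp "NYMEX.*" -compare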

Let's compare the size of the same stream directory in the 4.X format (/tickdb) and in the new 5.X format (/timebase):

[centos@ip-10-0-0-54 QuantServerHome]$ du tickdb/__nymex_ -sh
3.9G tickdb/__nymex_
[centos@ip-10-0-0-54 QuantServerHome]$ du timebase/__nymex_ -sh
586M timebase/__nymex_
tip

Converted to the 5.X format, this stream takes 6.65x less space!

info

Refer to TimeBase Shell for additional information about available commands.


Receive a Copy of Last Known Values

TimeBase provides the ability to deliver a copy of the last known values to new subscribers of a stream. Streams that use this feature are called unique streams.

Example

Consider a case where someone publishes portfolio positions into TimeBase. Every time a position changes, a new message is sent into TimeBase. For rarely traded instruments, there may be days or months between changes. If we want to know the positions in all instruments, it may be very expensive to scan the entire stream backwards until we see the last message for each instrument. Unique streams solve this problem by keeping the last position for each instrument in the TimeBase server cache. Any new subscriber receives a snapshot of all known positions immediately, before any live messages.

Message Identity

By default, messages are identified and cached using the Symbol and Instrument Type. However, in some cases unique streams need an extended identity. You may extend the default message identity with one or more additional custom fields.

tip

For keeping per-account positions in the example above, the message identity may include a custom field Account.

  • Additional fields should be annotated with @PrimaryKey (from the deltix.timebase.api package).
  • For Luminary stream schemas, use the [PrimaryKey] decorator (from the Deltix.Timebase.API namespace).

Producer API

No special actions or API changes are required to produce data into a unique stream.

Consumer API

No special actions are required from a consumer to subscribe to unique streams. Just be aware that the initial set of messages you receive may not be live data but rather a snapshot of the last known messages. You may inspect timestamps to distinguish live data from the historical snapshot. A more advanced technique may use the Real Time notification marker.