
Deployment

Setup

To set up a TimeBase Cluster:

  1. Choose a Cluster layout by deciding on the following attributes:
    • How to identify cluster nodes (static IP address vs. fixed hostname)
    • Desired cluster size (minimum size = 3)
    • Cluster member naming pattern
      For more information, see TimeBase.cluster.memberKey in the Settings section.
  2. On each cluster machine, set up or copy a standalone TimeBase installation (version 7 or later).
    You can use regular TimeBase Docker images for versions 7 and onward.
  3. For each TimeBase instance, apply cluster-specific changes to admin.properties.
    Note that some settings are instance-specific, so you cannot copy the same admin.properties to every cluster node.

Settings

The following table lists the Cluster-specific settings in admin.properties:

| Name | Required/Optional | Default Value | Description |
| --- | --- | --- | --- |
| TimeBase.host | Required (if the IP is not static) | | Host (static IP or hostname) of the cluster node used for client connections. Used in combination with TimeBase.port. |
| TimeBase.cluster.enabled | | False | (boolean) Enables cluster mode. Must be true to enable cluster functionality. |
| TimeBase.cluster.memberKey | Required | | Logical unique ID of the cluster node. Must be unique for each cluster node. Cannot be changed after the initial setup. |
| TimeBase.cluster.memberAddress | Required | | Host (static IP or hostname) and port of the cluster node used for the cluster-specific internal protocol. The port must be different from TimeBase.port. Must be unique for each cluster node. |
| TimeBase.cluster.allServers | Required | | Comma-separated list of hosts and ports for all known cluster nodes. The value from TimeBase.cluster.memberAddress must be in the list. Data from this list is used only when the cluster starts for the first time. |
| TimeBase.cluster.group | Required | | Cluster group ID. Must be the same for all cluster nodes. |
| TimeBase.cluster.stream.replicationFactor | Optional | 2 for a multi-node cluster | A positive integer that determines the default replication factor for all new streams, that is, the number of nodes each stream is written to. In the special case of a single-node cluster, it defaults to 1. |
| TimeBase.cluster.blockAllocatorType | Optional | DEFAULT | Determines the block allocation strategy. This option should not be changed in production. Possible values: DEFAULT, PREDICTABLE, MEMBER_KEY. |

The block allocation strategy types for TimeBase.cluster.blockAllocatorType are as follows.

  • DEFAULT: Main allocator implementation. Should be used in all production setups.
  • PREDICTABLE: Similar to the default allocator but has RNG turned off. More predictable and can be used for some tests.
  • MEMBER_KEY: Uses a trivial allocator that only considers memberKey for allocation priority. Suitable for tests only.
caution

After the initial setup, cluster-specific settings cannot be changed, with the exception of TimeBase.cluster.stream.replicationFactor.

Examples

This section provides examples of various admin.properties setups.

A configuration with a static IP address:

TimeBase.cluster.enabled=true
TimeBase.cluster.group=tbcluster
TimeBase.cluster.memberKey=member1
TimeBase.cluster.memberAddress=127.0.0.1:7081
TimeBase.cluster.allServers=127.0.0.1:7081,127.0.0.1:7082,127.0.0.1:7083

A configuration with a static hostname:

TimeBase.cluster.enabled=true
TimeBase.cluster.group=tbcluster
TimeBase.cluster.memberKey=member1
TimeBase.cluster.memberAddress=member1.cluster1.localnet:7080
TimeBase.cluster.allServers=member1.cluster1.localnet:7080,member2.cluster1.localnet:7080,member3.cluster1.localnet:7080

An example with additional options:

TimeBase.cluster.stream.replicationFactor=2
TimeBase.cluster.blockAllocatorType=DEFAULT

Connecting to the Cluster

To connect to the TimeBase cluster, specify the connection URL in the following format:

scheme "://" [[user:password]"@"] host1 [":" port1] | host2 [":" port2] | ... | hostN [":" portN] 

The scheme can be dxctick or dsctick (SSL communication).

To create a TimeBase client in Java code, use the following code fragment:

DXTickDB client = TickDBFactory.createFromUrl("dxctick://host1:9000|host2:9000|host3:9000");
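As an illustration, here is a minimal sketch that opens the cluster connection and lists the available streams. The host names and port are placeholders, and the snippet assumes the standard DXTickDB API calls (open, listStreams, close) and the com.epam.deltix package layout, which may differ between TimeBase distributions:

import com.epam.deltix.qsrv.hf.tickdb.pub.DXTickDB;
import com.epam.deltix.qsrv.hf.tickdb.pub.DXTickStream;
import com.epam.deltix.qsrv.hf.tickdb.pub.TickDBFactory;

public class ClusterClientExample {
    public static void main(String[] args) {
        // Host names and ports below are placeholders for your cluster nodes.
        DXTickDB client = TickDBFactory.createFromUrl("dxctick://host1:9000|host2:9000|host3:9000");
        try {
            client.open(true); // true = read-only
            for (DXTickStream stream : client.listStreams()) {
                System.out.println(stream.getKey());
            }
        } finally {
            client.close();
        }
    }
}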

Aggregator

To connect to a cluster, update the TimeBase aggregator settings to:

TimeBase.host=<any_cluster_node_hostname>
TimeBase.port=8011
TimeBase.cluster.enabled=true

Maintenance

Adding New Nodes (Scale-up)

To add new nodes to the cluster, prepare the new servers with the same TimeBase configuration as the existing cluster members, with the following changes:

  1. For each new member, set a unique value for TimeBase.cluster.memberKey.
  2. Modify TimeBase.cluster.allServers to contain the full member list.
    All old and new cluster members must be in the list.
  3. Start all the new members.
  4. Unless the TimeBase.cluster.autoRegisterAsPeer option is set, execute the following REST request for each new MEMBER_ADDRESS against any of the old, live cluster members: /tb/cluster/add?memberPeerId=MEMBER_ADDRESS (a sketch of this request is shown below).

To verify that the new members appear in the Peer list, check the cluster web stats page.
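For illustration, a minimal sketch of the registration request using Java's built-in HttpClient. The live-node address, the new member address, and the use of a plain GET are assumptions to adapt to your deployment; only the /tb/cluster/add path comes from the step above:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AddClusterMember {
    public static void main(String[] args) throws Exception {
        // Placeholders: an existing live cluster node and the address of the new member.
        String liveNode = "member1.cluster1.localnet:8011";
        String newMemberAddress = "member4.cluster1.localnet:7080";

        // The /tb/cluster/add path is taken from the step above; a plain GET is assumed.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://" + liveNode + "/tb/cluster/add?memberPeerId=" + newMemberAddress))
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}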

Removing Nodes (Scale-down)

This functionality will be available with TimeBase version 7.3.

You can remove nodes from the cluster under the following conditions:

  • All streams have a replication factor of 2 or more.
  • Temporary data unavailability is acceptable. The length of the unavailability window is proportional to the time needed to copy all the data from the servers being removed.
  • There is enough disk space on other cluster members to store all the data from the members that are going to be removed.
  • Enough cluster members will remain to keep up with the defined replication factor for all streams.
  • At least 3 members will remain in the cluster.

If all of the conditions are met, you can use the following steps to remove nodes:

  1. Enable a "Maintenance State" on all the servers you want to remove by sending the following REST request to any live cluster member: /tb/cluster/turn-on/maintenance?memberKey=MEMBER_KEY&duration=86400000&turnOffOnNodeRestart=false&cause=Decomission&overridesExisting=true
  2. On each server you want to remove, execute the following:
    1. Turn off the server.
    2. Wait for failover procedures to complete. All offline replications have to stop.
      You can check the status on the Web UI.
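For reference, a similar hedged sketch for step 1. The live-node address and member key are placeholders, and a plain GET is assumed; the path and query parameters come from the request shown above:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EnableMaintenance {
    public static void main(String[] args) throws Exception {
        // Placeholders: any live cluster member and the key of the node being removed.
        String liveNode = "member1.cluster1.localnet:8011";
        String memberKey = "member3";

        // Path and query parameters follow the maintenance request shown above.
        String url = "http://" + liveNode + "/tb/cluster/turn-on/maintenance"
                + "?memberKey=" + memberKey
                + "&duration=86400000"
                + "&turnOffOnNodeRestart=false"
                + "&cause=Decommission"
                + "&overridesExisting=true";

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(HttpRequest.newBuilder(URI.create(url)).GET().build(),
                        HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}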

Recommended Configuration

The minimum cluster size is 3 servers. The recommended cluster size depends on the number of expected active connections and throughput.

An odd cluster size (3, 5, 7, ...) is recommended because an even-sized cluster (4, 6, 8, ...) provides no additional gain in the number of failures the cluster can tolerate. However, this is not important for clusters larger than 8 servers.

Throughput

Throughput = (Single Node Throughput × Node Count) / (Replication Factor × K), where K is an overhead coefficient that can be estimated as 2.

For example, if you want 2x the throughput of a standalone TimeBase server and plan to use RF = 2, you need a cluster of 8 nodes: (1 × 8) / (2 × 2) = 2.

Replication Factor

The Replication Factor (RF) should be at least 2 and a maximum of 4.

The advantage of having a higher replication factor is that it provides resilience for your system. If the replication factor is N, up to N-1 nodes may fail without impacting availability.

The disadvantages of having a higher replication factor include:

  • Higher latency for producers, as data must be replicated to all replica nodes before it is acknowledged.
  • More disk space required on your system because each message is replicated N times.