
General Questions

What is the purpose of Cluster?

  • Increased data capacity: Cluster allows storing larger volumes of data in TimeBase than would be possible on a single machine.
  • Higher data ingestion rate: Cluster facilitates higher data ingestion rates, as it is not limited by the throughput of a single machine.
  • Fault tolerance: Cluster can continue to operate as long as more than half of its nodes are alive. It can also guarantee that data for a particular stream remains intact, provided that the number of failed machines in the cluster remains below that stream's replication factor.

What does the data storage guarantee mean?

Generally, for a stream with a Replication Factor (RF), we can guarantee that data is preserved as long as:

  • The data has reached the last node in the replication chain, meaning it is available for reading.
  • No more than RF-1 cluster nodes have failed. For instance, with a Replication Factor = 3, it's acceptable to lose up to 2 nodes.
  • Cluster quorum is reachable, meaning there are at least (N/2 + 1) live nodes that can connect to each other.
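The conditions above can be sketched as a simple check. This is an illustrative helper with hypothetical names, not part of the TimeBase API:

```python
def has_quorum(total_nodes: int, live_nodes: int) -> bool:
    # Quorum requires a strict majority: at least N/2 + 1 live nodes.
    return live_nodes >= total_nodes // 2 + 1

def data_preserved(replication_factor: int, failed_nodes: int,
                   total_nodes: int, live_nodes: int) -> bool:
    # Data for a fully replicated block survives as long as fewer than
    # RF nodes have failed and the cluster still has quorum.
    return (failed_nodes <= replication_factor - 1
            and has_quorum(total_nodes, live_nodes))

# With RF = 3 in a 5-node cluster, losing 2 nodes is acceptable:
print(data_preserved(3, 2, 5, 3))  # True
```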

What are Cluster's limitations?

A breakdown of Cluster's limitations is available on the Overview page.

Is Cluster available in the Community Edition?

No. At the moment, Cluster is a feature of TimeBase's Enterprise Edition.

How is it different from the standalone version of TimeBase?

The TimeBase Cluster API is different in the following ways:

  • Cluster has a number of limitations.
  • Cluster uses dxctick instead of the dxtick protocol prefix in the connection string URL, and dsctick instead of dstick. For example:
    • dxctick://host1:9000
    • dxctick://host1:9000|host2:9000|host3:9000
    • dsctick://user:password@host1:9000|host2:9000|host3:9000 (SSL enabled)
  • With a replication factor greater than 1, Cluster requires proportionally more space (2x more for RF=2).
  • When opening the local files of a particular server, the TickDB shell only sees a subset of the data.

What are the recommended hardware requirements?

  • CPU: Modern processor with at least 4 cores.
  • Memory: Minimum memory of 8 GB. In the case of a high load and many active readers/writers, 64 GB RAM is a decent choice.
  • Disk: Use multiple drives to maximize throughput. For fault tolerance, RAID is recommended (RAID 10 is preferred). Avoid network-attached storage (NAS) because this is a single point of failure. Also, latencies will degrade.
  • Network: A fast and reliable network is an essential performance component in a distributed system. Modern data-center networking (1 GbE, 10 GbE) is recommended.
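A cluster connection string simply joins the node addresses with `|`. The helper below is a hypothetical illustration of how such a URL could be assembled; it is not part of the TimeBase client library:

```python
def cluster_url(hosts, port=9000, ssl=False, user=None, password=None):
    # dxctick:// is the plain cluster protocol prefix; dsctick:// is SSL-enabled.
    scheme = "dsctick" if ssl else "dxctick"
    credentials = f"{user}:{password}@" if user else ""
    nodes = "|".join(f"{h}:{port}" for h in hosts)
    return f"{scheme}://{credentials}{nodes}"

print(cluster_url(["host1"]))
# dxctick://host1:9000
print(cluster_url(["host1", "host2", "host3"]))
# dxctick://host1:9000|host2:9000|host3:9000
print(cluster_url(["host1", "host2", "host3"], ssl=True,
                  user="user", password="password"))
# dsctick://user:password@host1:9000|host2:9000|host3:9000
```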

How do I get notified if something goes wrong?

To get notified in the event of an error or failure, set up Monitoring.

TimeBase logs contain detailed information.

Is there an admin app to manage Cluster?

There is no separate app. You can perform basic cluster management using the built-in TimeBase web interface.


What is Cluster's architecture?

Please refer to the Architecture page.

How is data distributed?

Data for each stream is split into partitions based on a designated "key," with each partition equivalent to a "space" in the standalone TimeBase API. These partitions are further divided into "blocks" based on time intervals, with TimeBase attempting to split the blocks to have similar binary sizes on the disk.

Each block is stored on a set of cluster nodes, with the actual number of nodes determined by the Replication Factor (RF). In a typical scenario, all blocks for a stream, and even individual partitions, are evenly distributed across all cluster nodes, maintaining a balanced data distribution.
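A simplified model of that placement is shown below. This is a hypothetical round-robin sketch only; the actual placement logic also accounts for node load:

```python
def assign_blocks(nodes, num_blocks, rf):
    """Assign each block to RF consecutive nodes, round-robin, so block
    replicas end up roughly evenly spread across nodes (simplified model)."""
    n = len(nodes)
    assignments = []
    for block in range(num_blocks):
        # Each block gets a chain of RF nodes, rotating the start node.
        replicas = [nodes[(block + i) % n] for i in range(rf)]
        assignments.append(replicas)
    return assignments

# 6 blocks across 4 nodes with RF = 2 -> 12 replicas, ~3 per node.
placement = assign_blocks(["A", "B", "C", "D"], 6, 2)
```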

For more details, refer to the Architecture page.

How do nodes communicate with each other?

Most data replication uses the regular TimeBase protocol.

Cluster-level metadata distribution and operation execution is performed using the Bolt protocol.

Can I choose a region for a specific node's location?

While it's possible to deploy TimeBase Cluster nodes in different regions, doing so is likely to produce processing latencies higher than acceptable for most common TimeBase usage scenarios.

Multi-region deployment is not recommended.

What is quorum in Cluster?

In the context of TimeBase Cluster, quorum refers to a state in which at least (N/2 + 1) cluster members are online and able to communicate with each other (N = number of nodes in the cluster). Quorum is necessary for the cluster to work. Without quorum, the cluster is effectively unavailable.

Quorum also indirectly means that one of the cluster members is the elected leader, responsible for coordinating any operations that can be applied to the cluster.

In the case of a temporary quorum loss, individual cluster members can still perform operations like accepting new data from loaders or providing locally available data to cursors.

Read/write balancing

Cluster tries to maintain load balance by allocating new blocks to machines with anticipated lower loads, ideally distributing load evenly across all machines. In practice, load fluctuation between machines is likely, especially if streams or partitions have different message rates or sizes. We plan to address these fluctuations in future developments.

Some additional points:

  • The outbound load for data reading is indirectly distributed according to the way data is written.
  • Live data can only be read from a specific cluster node.

If 100 clients read data from different streams or partitions, they are likely to read from different servers. Yet, if 100 clients read data from the same stream and partition, they have to connect to the same server. This can be a bottleneck.

Cluster data replication

There are two types of replication:

  • Live data replication: Used when the loader writes data into a partition with a replication factor = 2 or more. Replication starts when the loader starts, and stops when the loader is closed.
  • Block recovery replication: Used when a block is damaged, meaning it is missing or has incomplete data. Replication starts as soon as the cluster detects data inconsistency. This typically occurs when a cluster restarts and a new leader is elected or a cluster member goes offline.

For more information, refer to the Failover & Replication page.

How does live data reading work?

Live data reading occurs as follows:

  1. Writing: The TimeBase loader writes data to the first server in a sequence of servers assigned to a single data block.
  2. Replicating: The first server in the sequence replicates data to the second server in the sequence, and so on. Each group of servers assigned to a data block retains a fixed, sequential order.
  3. Reading: Clients read data from the last server in the sequence.

For example, say you have servers A, B, and C assigned to a data block with a replication factor = 3.

Data reading occurs in the following way:

  1. The client loader writes data to server A.
  2. Server B replicates data from server A.
  3. Server C replicates data from server B.
  4. All live readers consume data from server C.
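The write → replicate → read chain above can be modeled with plain lists. This is an illustrative simulation only, not TimeBase code:

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.data = []

def write_and_replicate(chain, messages):
    # The loader writes to the first node in the chain; each node then
    # replicates its data to the next node downstream.
    chain[0].data.extend(messages)
    for upstream, downstream in zip(chain, chain[1:]):
        downstream.data.extend(
            m for m in upstream.data if m not in downstream.data)

a, b, c = Node("A"), Node("B"), Node("C")   # RF = 3 chain
write_and_replicate([a, b, c], ["msg1", "msg2"])

# Live readers consume from the last node in the chain:
print(c.data)  # ['msg1', 'msg2']
```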

For more information, refer to the Data Flow Example.

What's the Raft algorithm?

Raft is a consensus algorithm that performs the following functions:

  • Elects the cluster leader.
  • Replicates the event log (operation journal) between cluster members.
    TimeBase uses this log to store all metadata and uses the elected leader to perform all metadata modifications. For more information, visit the Raft algorithm Wikipedia article.

What is Split Brain?

Split Brain describes a situation in which members of a distributed system lose network connectivity, get partitioned into two or more segments, and the members of each segment start to work independently of the members in other segments. For most systems, this situation is unacceptable, as it leads to the loss of a single source of truth and essentially creates a data corruption issue.

In Cluster, a "Split Brain" situation is not possible because Raft quorum requires that more than half of Cluster's members are online and reachable.

For more information, refer to the Split Brain Wikipedia article.


What's the write/read performance like?

Generally, single loader throughput is similar to the loader throughput of standalone TimeBase.

If there is no write activity on other cluster nodes, a single cluster node's throughput should be similar to the overall throughput of a standalone TimeBase instance.

Overall cluster throughput is proportional to (N/RF), where N is the total node count and RF is the average replication factor across the cluster's streams.
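The proportionality above follows from each message being written RF times. A minimal back-of-the-envelope calculation, with an assumed per-node rate:

```python
def cluster_write_throughput(node_throughput, node_count, avg_rf):
    # Each message is written avg_rf times, so usable cluster throughput
    # scales with N / RF rather than with N alone.
    return node_throughput * node_count / avg_rf

# 4 nodes, 1,000,000 msg/s each (assumed), RF = 2:
print(cluster_write_throughput(1_000_000, 4, 2))  # 2000000.0
```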

What's the write/read latency?

For streams with a replication factor (RF) = 1, the read/write latency is similar to the latency of standalone TimeBase.

For streams with RF ≥ 2, mean latency increases with the length of the replication chain. However, it should not exceed (RF × Standalone TimeBase Mean Latency).
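That upper bound is simple arithmetic: each replication hop adds at most one standalone-equivalent delay. A sketch with an assumed standalone latency:

```python
def worst_case_mean_latency(standalone_latency_ms, rf):
    # Each of the RF hops in the replication chain contributes at most
    # one standalone-equivalent delay, bounding the mean latency.
    return standalone_latency_ms * rf

# Assuming 0.5 ms standalone mean latency and RF = 3:
print(worst_case_mean_latency(0.5, 3))  # 1.5
```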