[Incident Summary] Apache SeaTunnel Cluster Split-Brain: Configuration Optimization

Apache SeaTunnel

Cluster Configuration

Issues

Since April, the cluster has experienced three split-brain incidents, each involving a specific node either splitting from the cluster or shutting down automatically.

Master Node

The following core logs were observed:

Slow Operation logs printed by the Hazelcast monitoring thread.

Hazelcast heartbeat timeout of 60s showed node 198 leaving the cluster.

Node 198 (Worker)

No heartbeat detected from Hazelcast cluster nodes, with a timeout exceeding 60000ms.

Repeated attempts to reconnect to the cluster.

Status queries and job submission requests to the node were unresponsive, leading to a cluster-wide deadlock. Our node health check interfaces were also unresponsive. This resulted in service unavailability during peak hours, prompting us to quickly restart the cluster after detecting split-brain issues.

After parameter adjustments, nodes sometimes automatically shut down.

Problem Analysis

The potential causes for Hazelcast cluster split-brain issues are as follows:

  • Inconsistent NTP time synchronization across the cluster's ECS instances.
  • Network jitter on the ECS instances hosting the cluster, making services unreachable.
  • Full GC in the SeaTunnel JVM causing long stop-the-world pauses, so nodes fail to respond to cluster heartbeats.

We verified with the operations team that there were no network issues, and the Alibaba Cloud NTP service confirmed that all three servers were synchronized.

To investigate the third cause, we checked the last Hazelcast health-check logs on node 198 before the anomaly. The clock offset relative to the cluster was -100 milliseconds, which is negligible.

We then added JVM GC logging parameters to monitor FullGC times. Individual pauses reached up to 27s, so a couple of back-to-back FullGCs can easily push missed heartbeats past Hazelcast's 60s timeout.
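To quantify the pauses, the GC log can be scanned for Full GC entries. A minimal sketch, assuming JDK 8-style G1 logs as produced by -XX:+PrintGCDetails with date stamps; the sample lines and the regex are illustrative, not taken from our actual logs:

```python
import re

# Matches JDK 8 G1 Full GC entries, e.g.
# "2024-04-20T03:12:45.123+0800: 9876.543: [Full GC (Allocation Failure)  30G->18G(32G), 27.0345 secs]"
FULL_GC = re.compile(r"\[Full GC .*?, (\d+\.\d+) secs\]")

def full_gc_pauses(log_lines):
    """Return the pause duration in seconds of every Full GC entry."""
    return [float(m.group(1)) for line in log_lines
            for m in FULL_GC.finditer(line)]

sample = [
    "2024-04-20T03:12:45.123+0800: 9876.543: "
    "[Full GC (Allocation Failure)  30G->18G(32G), 27.0345 secs]",
    "2024-04-20T03:13:40.001+0800: 9931.421: "
    "[GC pause (G1 Evacuation Pause) (young), 0.2311 secs]",  # young GC: ignored
]
pauses = full_gc_pauses(sample)
print(max(pauses))  # worst Full GC pause in the sample: 27.0345
```

Plotting these durations over time is what surfaced the correlation between FullGC spikes and the heartbeat timeouts.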

Additionally, a job syncing a specific ClickHouse (CK) table with 1.4 billion records frequently triggered FullGC after running for a certain period.

Solutions

Increase the SeaTunnel (ST) Cluster Heartbeat Timeout

Hazelcast’s cluster failure detector identifies members that are unreachable or crashed. However, according to the famous FLP result, it’s impossible to distinguish between a crashed member and a slow member in asynchronous systems. The solution is to use an unreliable failure detector. Hazelcast has built-in detectors: Deadline Failure Detector and Phi Accrual Failure Detector.

By default, Hazelcast uses the Deadline Failure Detector.

Deadline Failure Detector

Uses an absolute timeout for missed heartbeats; a member is considered unreachable once the timeout elapses.

Configuration:

hazelcast:
  properties:
    hazelcast.heartbeat.failuredetector.type: deadline
    hazelcast.heartbeat.interval.seconds: 5
    hazelcast.max.no.heartbeat.seconds: 120
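The deadline detector's semantics are simple enough to sketch. This is an illustrative model of the behavior those two properties configure, not Hazelcast's actual code; the class and parameter names are ours:

```python
import time

class DeadlineFailureDetector:
    """A member is alive iff a heartbeat arrived within the last
    `max_no_heartbeat` seconds (hazelcast.max.no.heartbeat.seconds)."""

    def __init__(self, max_no_heartbeat=120.0):
        self.max_no_heartbeat = max_no_heartbeat
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        # Called whenever a heartbeat is received from the member.
        self.last_heartbeat = time.monotonic()

    def is_alive(self, now=None):
        now = time.monotonic() if now is None else now
        return (now - self.last_heartbeat) <= self.max_no_heartbeat
```

With a 5s interval and a 120s timeout, roughly 24 consecutive heartbeats must be missed before a member is declared unreachable.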

Phi-accrual Failure Detector

Tracks the intervals between heartbeats in a sliding window and computes a suspicion level (phi) from the distribution of those samples; a member is suspected once phi crosses a configurable threshold.

Configuration:

hazelcast:
  properties:
    hazelcast.heartbeat.failuredetector.type: phi-accrual
    hazelcast.heartbeat.interval.seconds: 1
    hazelcast.max.no.heartbeat.seconds: 60
    hazelcast.heartbeat.phiaccrual.failuredetector.threshold: 10
    hazelcast.heartbeat.phiaccrual.failuredetector.sample.size: 200
    hazelcast.heartbeat.phiaccrual.failuredetector.min.std.dev.millis: 100
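The phi value these parameters tune can be sketched as follows. This is the standard phi-accrual formula (suspicion = -log10 of the probability that a heartbeat this overdue would still arrive), assuming normally distributed inter-heartbeat intervals; Hazelcast's internals may differ in details, and the helper name `phi` is ours:

```python
import math

def phi(elapsed_ms, mean_ms, std_ms):
    """Suspicion level given the time since the last heartbeat and the
    mean/std of inter-heartbeat intervals in the sliding window."""
    std_ms = max(std_ms, 100.0)  # floor, cf. ...min.std.dev.millis: 100
    # P(X > elapsed) for a normal distribution, via the complementary error function
    z = (elapsed_ms - mean_ms) / (std_ms * math.sqrt(2.0))
    p_later = 0.5 * math.erfc(z)
    return -math.log10(max(p_later, 1e-300))

# With 1s heartbeats (mean 1000 ms, std 100 ms):
print(phi(1000, 1000, 100))  # ~0.30: a heartbeat arriving exactly on time
print(phi(2000, 1000, 100))  # far past due: well above the threshold of 10
```

Unlike the deadline detector's fixed cutoff, phi grows continuously as a heartbeat becomes overdue, so the threshold adapts to how regular the heartbeats have historically been.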

To ensure accuracy, we used the community-recommended phi-accrual detector with a 180s timeout in hazelcast.yml:

hazelcast:
  properties:
    hazelcast.heartbeat.failuredetector.type: phi-accrual
    hazelcast.heartbeat.interval.seconds: 1
    hazelcast.max.no.heartbeat.seconds: 180
    hazelcast.heartbeat.phiaccrual.failuredetector.threshold: 10
    hazelcast.heartbeat.phiaccrual.failuredetector.sample.size: 200
    hazelcast.heartbeat.phiaccrual.failuredetector.min.std.dev.millis: 100

Optimize GC Settings

SeaTunnel uses the G1 garbage collector by default. With a large heap, G1 falls back to a stop-the-world FullGC whenever YoungGC/MixedGC cannot reclaim memory fast enough. Our goal was therefore to maximize the memory reclaimed by the YoungGC/MixedGC phases so that FullGC is never needed.

Initial settings:

-Xms32g
-Xmx32g
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/tmp/seatunnel/dump/zeta-server
-XX:MaxMetaspaceSize=8g
-XX:+UseG1GC
-XX:+PrintGCDetails
-Xloggc:/alidata1/za-seatunnel/logs/gc.log
-XX:+PrintGCDateStamps

We relaxed the GC pause-time target, giving each collection more time to reclaim memory:

-XX:MaxGCPauseMillis=5000

First optimization attempt:

-Xms32g
-Xmx32g
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/tmp/seatunnel/dump/zeta-server
-XX:MaxMetaspaceSize=8g
-XX:+UseG1GC
-XX:+PrintGCDetails
-Xloggc:/alidata1/za-seatunnel/logs/gc.log
-XX:+PrintGCDateStamps
-XX:MaxGCPauseMillis=5000

MixedGC logs showed pause times within the target, but FullGCs, each lasting around 20s, were not eliminated.

To further optimize, we increased the old generation memory and adjusted G1 garbage collector parameters:

-Xms42g
-Xmx42g
-XX:GCTimeRatio=4
-XX:G1ReservePercent=15
-XX:G1HeapRegionSize=32M

Heap memory was increased from 32G to 42G, effectively increasing the old generation limit, which should reduce FullGC frequency.

Second optimization attempt:

-Xms42g
-Xmx42g
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/tmp/seatunnel/dump/zeta-server
-XX:MaxMetaspaceSize=8g
-XX:+UseG1GC
-XX:+PrintGCDetails
-Xloggc:/alidata1/za-seatunnel/logs/gc.log
-XX:+PrintGCDateStamps
-XX:MaxGCPauseMillis=5000
-XX:GCTimeRatio=4
-XX:G1ReservePercent=15
-XX:G1HeapRegionSize=32M

While the number of FullGCs decreased, they did not disappear entirely. We noticed that old generation data was not being properly cleaned during MixedGC.

Final optimization attempt:

-Xms42g
-Xmx42g
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/tmp/seatunnel/dump/zeta-server
-XX:MaxMetaspaceSize=8g
-XX:+UseG1GC
-XX:+PrintGCDetails
-Xloggc:/alidata1/za-seatunnel/logs/gc.log
-XX:+PrintGCDateStamps
-XX:MaxGCPauseMillis=5000
-XX:InitiatingHeapOccupancyPercent=50
-XX:+UseStringDeduplication
-XX:GCTimeRatio=4
-XX:G1ReservePercent=15
-XX:ConcGCThreads=12
-XX:G1HeapRegionSize=32M
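For reference, our reading of what the key tuning flags do; the annotations are ours, based on the JDK 8 G1 defaults, not part of the original configuration:

```
-XX:MaxGCPauseMillis=5000              # pause-time target: allow longer but more productive collections
-XX:InitiatingHeapOccupancyPercent=50  # start the concurrent marking cycle at 50% heap occupancy
-XX:+UseStringDeduplication            # deduplicate identical String contents during GC (G1 only)
-XX:GCTimeRatio=4                      # allow up to 1/(1+4) = 20% of time in GC (default 9, ~10%)
-XX:G1ReservePercent=15                # keep 15% of the heap free as to-space, avoiding evacuation failure
-XX:ConcGCThreads=12                   # more concurrent marking threads, so marking finishes before the heap fills
-XX:G1HeapRegionSize=32M               # larger regions: fewer allocations count as humongous (threshold = half a region, 16M)
```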

Optimization Results

Since the configuration changes on April 26, there have been no further split-brain issues, and service availability has been restored.

Since JVM parameter adjustments on April 30, we achieved zero FullGCs during the May Day holiday, with no health check interface issues.

While this optimization may have slightly impacted application thread throughput, it ensured cluster stability, enabling widespread internal deployment of Zeta.
