DHT Sync Lag
Measures the lag between an agent publishing data and other peers being able to see it. This scenario has two roles:
- write: A simple job that creates entries with a timestamp field. Those entries are linked to a known base hash. For each write, the metric ws.custom.dht_sync_sent_count is incremented.
- record_lag: A job that repeatedly queries for links from the known base hash. It keeps track of the records it has seen; when a new record is found, it calculates the difference between the new record's timestamp and the current time, and records that difference as a custom metric called wt.custom.dht_sync_lag.
After each behaviour loop, the metric ws.custom.dht_sync_recv_count is incremented.
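The record_lag behaviour described above can be sketched roughly as follows. This is a minimal illustrative sketch, not the actual Wind Tunnel job: the process_links helper and the (record_id, created_at) tuple shape are assumptions made for the example.

```python
# Sketch of the record_lag behaviour: track which records have
# already been seen and, for each newly discovered record, report
# the difference between the current time and the record's embedded
# timestamp. (process_links and the link tuple shape are
# illustrative, not the real Wind Tunnel API.)

seen: set[str] = set()

def process_links(links: list[tuple[str, float]], now: float) -> list[float]:
    """Return the sync lag, in seconds, for each not-yet-seen record."""
    lags = []
    for record_id, created_at in links:
        if record_id not in seen:
            seen.add(record_id)
            # This delta is what gets recorded as wt.custom.dht_sync_lag.
            lags.append(now - created_at)
    return lags
```

Because already-seen records are skipped, each record contributes exactly one lag sample: the time of its first discovery by this agent.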
Started
Fri, 20 Mar 2026 19:07:14 UTC
Peer count
200
Peer count at end
1988
Behaviours
- record_lag (1 agent)
- write (1 agent)
Holochain version
0.6.1-rc.3
Wind Tunnel version
0.6.0
Run ID
dht_sync_lag_23357736246
Create rate
The average number of created records per agent.
mean 66.39/s
max 219/s (highest per-partition mean rate)
min 12.96/s (lowest per-partition mean rate)
Sync lag timing
The average time between when a record was created and when it was first seen by an agent.
mean 107.974788s (SD = 27.547397s)
max mean 233.138305s (highest per-partition mean latency)
min mean 33.940831s (lowest per-partition mean latency)
Sync lag rate
The average number of created records discovered per agent.
mean 108.6/s
max 863/s (highest per-partition mean rate)
min 0/s (lowest per-partition mean rate)
Error count
The number of errors encountered during the scenario.
16
Host Metrics
User CPU usage
CPU usage by user space
33.39%
Network receive rate (primary)
Rate of bytes received on primary network interface
p5 2.42 KiB/s < mean 284.16 KiB/s (SD = 487.97 KiB/s) < p95 864.98 KiB/s
Network send rate (primary)
Rate of bytes sent on primary network interface
p5 2.44 KiB/s < mean 237.58 KiB/s (SD = 473.24 KiB/s) < p95 805.75 KiB/s
Total bytes received
Total bytes received on primary network interface
count 17.50 GiB
mean 38.12 MiB/s
Total bytes sent
Total bytes sent on primary network interface
count 14.63 GiB
mean 31.87 MiB/s
CPU spike anomaly
Whether a CPU spike anomaly was detected
Detected
Warning: CPU p99 reached 91.2%
Memory leak anomaly
Whether a memory leak anomaly was detected
Detected
Warning: Memory growing at 461.66 MB/s
Disk full anomaly
Whether a disk full anomaly was detected
NotDetected
Swap thrashing anomaly
Whether a swap thrashing anomaly was detected
Detected
Critical: Heavy swap usage (23.2% swap used)
System overload anomaly
Whether a system overload anomaly was detected
NotDetected
Additional Host Metrics
CPU usage
Total CPU usage and kernel CPU usage
Total: p5 9.73% < mean 47.76% (SD = 22.41%) < p95 85.16%
System: 14.37%
CPU percentiles
CPU usage percentiles
p50 46.67%
p95 85.16%
p99 91.22%
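The percentile figures throughout this report (p50/p95/p99 here, and the p5/p95 spans elsewhere) summarize per-host samples. One common way to compute such a figure is the nearest-rank method, sketched below; whether the metrics collector uses this convention or an interpolating one is an assumption, not stated in the report.

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: the smallest sample such that at
    least p% of all samples are less than or equal to it."""
    if not samples:
        raise ValueError("samples must be non-empty")
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[max(rank - 1, 0)]
```

With 100 evenly spaced CPU readings, p99 picks the 99th-largest value, which is why a p99 of 91.22% can sit well above the 47.76% mean.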
CPU usage above 80%
Number of hosts above 80% CPU and mean time spent above threshold for those hosts
count 82 hosts
mean time 0.22s
Memory used percentage
Percentage of memory used
p5 6.52% < mean 11.58% (SD = 7.73%) < p95 27.99%
Memory available percentage
Percentage of available memory
p5 72.01% < mean 88.42% (SD = 7.73%) < p95 93.48%
Max host memory used
Maximum memory usage percentage across all hosts
max 75.50%
Max host swap used percentage
Maximum swap space usage percentage across all hosts
max 23.20%
Memory growth rate
Rate of memory growth over time
growth 461.66 MB/s
Disk read throughput
Disk read throughput in MB/s
0.77 MB/s
Disk write throughput
Disk write throughput in MB/s
945.35 MB/s
Disk space utilization risk
Number of hosts nearing disk space capacity by mount point
Mount Point        Hosts at risk
/                  0/172 hosts
/efi-boot          0/18 hosts
/etc/hostname      0/22 hosts
/etc/hosts         0/22 hosts
/etc/resolv.conf   0/22 hosts
/nix/store         0/21 hosts
System load average
System load averages over 1, 5, and 15 minutes. This is an unnormalised value not divided by number of CPUs, so it is only meaningful if all machines have the same core count.
1 min 2.33
5 min 1.13
15 min 0.46
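Since the load averages above are unnormalised, comparing them across machines means dividing by the core count. A small illustrative helper follows; the 4-core host in the usage comment is a hypothetical example, not a figure from this run.

```python
def normalized_load(load_avg: float, cpu_count: int) -> float:
    """Raw load average divided by core count. Values above 1.0 mean
    more runnable tasks than cores, on average, over the window."""
    if cpu_count <= 0:
        raise ValueError("cpu_count must be positive")
    return load_avg / cpu_count

# On a hypothetical 4-core host, this run's 1-minute load of 2.33
# would normalize to 0.5825, i.e. just over half the cores busy.
```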
CPU overloaded hosts
Percentage of hosts that experienced CPU overload
0.00%
CPU pressure
CPU pressure over 10, 60, and 300 second averages
10 second average: p5 0% < mean 26.9904% (SD = 18.8237%) < p95 55.03%
60 second average: 23.2024%
300 second average: 11.3144%
Memory pressure some
Memory pressure (some tasks stalled) over 10, 60, and 300 second averages
10 second average: p5 0% ≤ mean 0% (SD = 0%) ≤ p95 0%
60 second average: 0.0000%
300 second average: 0.0000%
Memory pressure full
Memory pressure (all tasks stalled) over 10, 60, and 300 second averages
10 second average: p5 0% ≤ mean 0% (SD = 0%) ≤ p95 0%
60 second average: 0.0000%
300 second average: 0.0000%
I/O pressure some
I/O pressure (some tasks stalled) over 10, 60 and 300 second averages
10 second average: p5 0.18% < mean 8.9984% (SD = 9.4827%) < p95 32.34%
60 second average: 7.7846%
300 second average: 3.9267%
I/O pressure full
I/O pressure (all tasks stalled) over 10, 60 and 300 second averages
10 second average: p5 0.02% < mean 5.6388% (SD = 8.5761%) < p95 27.6%
60 second average: 4.9163%
300 second average: 2.5581%
Holochain process CPU usage
CPU usage by Holochain process
p5 9.8% < mean 46.43% (SD = 20.83%) < p95 81.08%
Holochain process memory (PSS)
Proportional Set Size memory of Holochain process
p5 214.62 KiB < mean 284.24 KiB (SD = 51.83 KiB) < p95 355.83 KiB
Holochain process threads
Number of threads in Holochain process
p5 13 threads < mean 28.99 threads (SD = 14.66 threads) < p95 51 threads
Holochain process file descriptors
Number of file descriptors used by Holochain process
p5 61 file descriptors < mean 76.26 file descriptors (SD = 16.45 file descriptors) < p95 96 file descriptors