DHT Sync Lag
Measure the lag between an agent publishing data and other peers being able to see it. This scenario has two roles:
write: A simple job that creates entries with a timestamp field. Those entries are linked to a known base hash. For each write, the metric ws.custom.dht_sync_sent_count is incremented.
record_lag: A job that repeatedly queries for links from the known base hash. It keeps track of the records it has already seen; when a new record is found, it calculates the difference between the new record's timestamp and the current time and records it as the custom metric wt.custom.dht_sync_lag. After each behaviour loop the metric ws.custom.dht_sync_recv_count is incremented.
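The record_lag behaviour described above boils down to a dedup-and-diff loop: remember which records have been seen, and for each newly discovered one emit now minus its embedded timestamp. A minimal sketch of that bookkeeping, assuming hypothetical `TimedRecord`/`LagTracker` names (the real scenario uses Wind Tunnel's instrumentation API, not these types):

```rust
use std::collections::HashSet;

/// Hypothetical shape of a record returned by the link query: the
/// entry hash plus the creation timestamp it carries (microseconds).
struct TimedRecord {
    hash: Vec<u8>,
    created_at_us: u64,
}

/// Tracks which records have been seen; for each newly discovered
/// record it yields the sync lag in seconds (now - created_at).
struct LagTracker {
    seen: HashSet<Vec<u8>>,
}

impl LagTracker {
    fn new() -> Self {
        Self { seen: HashSet::new() }
    }

    /// Returns Some(lag_seconds) the first time a record is seen,
    /// None on every subsequent sighting.
    fn observe(&mut self, record: &TimedRecord, now_us: u64) -> Option<f64> {
        if self.seen.insert(record.hash.clone()) {
            Some(now_us.saturating_sub(record.created_at_us) as f64 / 1e6)
        } else {
            None
        }
    }
}
```

Only first sightings contribute to the dht_sync_lag metric, which is why the reported lag rate can exceed the create rate when a backlog of previously unseen records arrives in one query.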
Started
Sat, 14 Mar 2026 06:59:13 UTC
Peer count
10
Peer count at end
10
Behaviours
- record_lag (1 agent)
- write (1 agent)
Holochain version
0.6.1-rc.0
Wind Tunnel version
0.6.0
Run ID
dht_sync_lag_23082714356
Create rate
The average number of created records per agent.
Mean
mean
68.29/s
Max
Highest per-partition mean rate
max
98.14/s
Min
Lowest per-partition mean rate
min
49.54/s
Sync lag timing
The average time between when a record is created and when it is first seen by an agent.
Mean
mean
49.180095s
(SD =
19.917542s)
Max
Highest per-partition mean latency
max mean
67.483935s
Min
Lowest per-partition mean latency
min mean
20.381659s
Sync lag rate
The average number of created records discovered per agent.
Mean
mean
157.93/s
Max
Highest per-partition mean rate
max
197.67/s
Min
Lowest per-partition mean rate
min
113.33/s
Error count
The number of errors encountered during the scenario.
11
Cascade duration
The time taken to execute a cascade query.
mean
0.023815s
(SD =
0.046031s)
p50
0.001217s
p95
0.12528s
p99
0.18405s
WASM usage
The metered usage of a wasm ribosome.
Total
total
76970 over 321.317163413s
mean rate
3.8586815402e+08/s
std
1.872640324885e+10/s
p5
0/s
p95
2.7703409676e+08/s
peak
2.19326756116811e+12/s
zome=timed,fn=created_timed_entry
total
0 over 313.295390352s
Not enough time data to show a trend.
zome=timed,fn=get_timed_entries_local
total
76207274 over 274.120669911s
mean rate
4.1654151889e+08/s
std
7.59086312238e+09/s
p5
0/s
p95
1.19542735432e+09/s
peak
5.1841497710922e+11/s
zome=timed_integrity,fn=entry_defs
total
0 over 321.316110899s
Not enough time data to show a trend.
post_commit duration
The time spent executing a post commit.
mean
0.000938s
(SD =
0.001298s)
p50
0.000542s
p95
0.002977s
p99
0.006701s
Publish DHT ops workflow duration
The time spent running the publish workflow.
mean
0.11s
(SD =
0.167228s)
p50
0.082401s
p95
0.27198s
p99
0.417924s
Integrate DHT ops workflow duration
The time spent running the integration workflow.
mean
0.023432s
(SD =
0.419029s)
p50
0.010247s
p95
0.061976s
p99
0.124778s
Countersigning workflow duration
The time spent running the countersigning workflow.
mean
0.003259s
(SD =
0.001013s)
p50
0.003169s
p95
0.004908s
p99
0.004908s
App validation workflow duration
The time spent running the app validation workflow.
mean
4.985612s
(SD =
11.185382s)
p50
0.081694s
p95
22.034547s
p99
61.232978s
System validation workflow duration
The time spent running the sys validation workflow.
mean
0.144412s
(SD =
1.19718s)
p50
0.040476s
p95
0.218038s
p99
0.408375s
Validation receipt workflow duration
The time spent running the validation receipt workflow.
mean
2.529088s
(SD =
17.959068s)
p50
0.003379s
p95
2.116123s
p99
87.4164s
Authored DB
The utilization of connections in the authored database connection pool.
Utilisation
p5
0%
<
mean
0.13%
(SD =
0.08%)
<
p95
0.22%
Use time
mean
0.012144s
(SD =
0.033654s)
p50
0.000319s
p95
0.088692s
p99
0.154484s
Conductor DB
The utilization of connections in the conductor database connection pool.
Utilisation
p5
0%
≤
mean
0%
(SD =
0%)
≤
p95
0%
Use time
mean
0.000181s
(SD =
0.000449s)
p50
0.000121s
p95
0.000393s
p99
0.001354s
DHT DB
The utilization of connections in the DHT database connection pool.
Utilisation
p5
0.11%
<
mean
0.25%
(SD =
0.12%)
<
p95
0.44%
Use time
mean
0.00244s
(SD =
0.013999s)
p50
0.000286s
p95
0.004589s
p99
0.067621s
Request roundtrip duration: 'get'
The time spent sending a get request and awaiting its response
mean
0.27903s
(SD =
3.284927s)
p50
0.105072s
p95
0.201207s
p99
0.465793s
Request roundtrip duration: 'send_validation_receipts'
The time spent sending a send_validation_receipts request and awaiting its response
mean
0.511525s
(SD =
2.499831s)
p50
0.256441s
p95
0.938796s
p99
3.575295s
Handle incoming response duration
The time spent handling an incoming response message of any type
mean
2.3e-05s
(SD =
6.5e-05s)
p50
1.1e-05s
p95
6.8e-05s
p99
0.000243s
Handle incoming request duration: 'get'
The time spent handling an incoming get request
mean
0.00222s
(SD =
0.006274s)
p50
0.001003s
p95
0.006725s
p99
0.023557s
Handle incoming request duration: 'send_validation_receipts'
The time spent handling an incoming send_validation_receipts request
mean
0.408962s
(SD =
2.86009s)
p50
0.126018s
p95
0.825911s
p99
3.518394s
Host Metrics
User CPU usage
CPU usage by user space
49.36%
Network receive rate (primary)
Rate of bytes received on primary network interface
p5 219 B/s
<
mean 178.05 KiB/s
(SD = 350.22 KiB/s)
<
p95 322.11 KiB/s
Network send rate (primary)
Rate of bytes sent on primary network interface
p5 335 B/s
<
mean 166.05 KiB/s
(SD = 178.81 KiB/s)
<
p95 565.32 KiB/s
Total bytes received
Total bytes received on primary network interface
count
505.98 MiB
mean
1.63 MiB/s
Total bytes sent
Total bytes sent on primary network interface
count
471.87 MiB
mean
1.52 MiB/s
CPU spike anomaly
CPU spike anomaly was detected
Detected
Warning: CPU p99 reached 97.6%
Memory leak anomaly
Memory leak anomaly was detected
Detected
Warning: Memory growing at 644.49 MB/s
Disk full anomaly
Disk full anomaly was detected
Not detected
Swap thrashing anomaly
Swap thrashing anomaly was detected
Not detected
System overload anomaly
System overload anomaly was detected
Not detected
Additional Host Metrics
CPU usage
Total CPU usage and kernel CPU usage
Total
p5
57.64%
<
mean
74.99%
(SD =
11.34%)
<
p95
89.64%
System
25.63%
CPU percentiles
CPU usage percentiles
p50
75.64%
p95
89.64%
p99
97.60%
CPU usage above 80%
Number of hosts above 80% CPU and mean time spent above threshold for those hosts
count
9 hosts
mean time
0.42s
Memory used percentage
Percentage of memory used
p5
6.7%
<
mean
9.19%
(SD =
1.39%)
<
p95
13.22%
Memory available percentage
Percentage of available memory
p5
86.78%
<
mean
90.81%
(SD =
1.39%)
<
p95
93.3%
Max host memory used
Maximum memory usage percentage across all hosts
max
11.46%
Max host swap used percentage
Maximum swap space usage percentage across all hosts
max
0.00%
Memory growth rate
Rate of memory growth over time
growth
644.49 MB/s
Disk read throughput
Disk read throughput in MB/s
0.00 MB/s
Disk write throughput
Disk write throughput in MB/s
0.00 MB/s
Disk space utilization risk
Number of hosts nearing disk space capacity by mount point
Mount Point
/
0/10 hosts
System load average
System load averages over 1, 5, and 15 minutes. This is an unnormalised value not divided by number of CPUs, so it is only meaningful if all machines have the same core count.
1 min
2.54
5 min
1.19
15 min
0.48
CPU overloaded hosts
Percentage of hosts that experienced CPU overload
0.00%
CPU pressure
CPU pressure over 10, 60, and 300 second averages
10 second average
p5
6.09%
<
mean
43.3671%
(SD =
16.627%)
<
p95
71.31%
60 second average
35.0897%
300 second average
16.2656%
Memory pressure some
Memory pressure (some tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0%
≤
mean
0%
(SD =
0%)
≤
p95
0%
60 second average
0.0000%
300 second average
0.0000%
Memory pressure full
Memory pressure (all tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0%
≤
mean
0%
(SD =
0%)
≤
p95
0%
60 second average
0.0000%
300 second average
0.0000%
I/O pressure some
I/O pressure (some tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
1.37%
<
mean
8.9189%
(SD =
5.3951%)
<
p95
18.33%
60 second average
7.4553%
300 second average
3.7360%
I/O pressure full
I/O pressure (all tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0.72%
<
mean
5.3697%
(SD =
4.1686%)
<
p95
12.32%
60 second average
4.5818%
300 second average
2.3907%
Holochain process CPU usage
CPU usage by Holochain process
p5
20.41%
<
mean
65.36%
(SD =
15.34%)
<
p95
82.91%
Holochain process memory (PSS)
Proportional Set Size memory of Holochain process
p5 131.71 KiB
<
mean 263.63 KiB
(SD = 55.42 KiB)
<
p95 304.61 KiB
Holochain process threads
Number of threads in Holochain process
p5
14 threads
<
mean
37.59 threads
(SD =
20.09 threads)
<
p95
90 threads
Holochain process file descriptors
Number of file descriptors used by Holochain process
p5
49 file descriptors
<
mean
76.43 file descriptors
(SD =
15.49 file descriptors)
<
p95
93 file descriptors
Zero-Arc Create and Read
A zero-arc/full-arc mixed scenario with two types of zero-arc nodes -- ones that create data and ones that read data -- as well as full-arc nodes to "relay" the data. The scenario has three roles:
zero_write: A zero-arc conductor that creates entries with a timestamp field. Those entries are linked to a known base hash so that zero_read nodes can retrieve them.
zero_read: A zero-arc conductor that reads the entries created by the zero-arc write node(s) and records the time lag between when an entry was created and when it was first discovered.
full: A full-arc conductor whose only job is to serve entries to the zero-arc nodes.
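All three roles hinge on the same pattern: every writer links its entry from one well-known base hash so readers can discover all entries with a single get_links-style query on that base. A toy in-memory sketch of the pattern (names are illustrative, not the Holochain HDK API):

```rust
use std::collections::HashMap;

/// Toy stand-in for the DHT link structure the scenario relies on:
/// a map from base hash to the timestamps of entries linked from it.
struct LinkStore {
    links: HashMap<Vec<u8>, Vec<u64>>,
}

impl LinkStore {
    fn new() -> Self {
        Self { links: HashMap::new() }
    }

    /// zero_write side: create a timestamped entry and link it
    /// from the well-known base hash.
    fn create_timed_entry(&mut self, base: &[u8], created_at_us: u64) {
        self.links.entry(base.to_vec()).or_default().push(created_at_us);
    }

    /// zero_read side: enumerate everything linked from the base.
    fn get_links(&self, base: &[u8]) -> &[u64] {
        self.links.get(base).map(|v| v.as_slice()).unwrap_or(&[])
    }
}
```

In the real scenario the store is the DHT itself, and because zero-arc nodes hold no shard of it, both the writes and the reads must round-trip through the full-arc relay, which is what the fetch lag above measures.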
Started
Sat, 14 Mar 2026 07:04:20 UTC
Peer count
9
Peer count at end
9
Behaviours
- full (1 agent)
- zero_read (1 agent)
- zero_write (1 agent)
Holochain version
0.6.1-rc.0
Wind Tunnel version
0.6.0
Run ID
zero_arc_create_and_read_23082714356
Create rate
The number of timed entries created by the zero-arc write node(s) per second.
Mean
mean
134.73/s
Max
Highest per-partition mean rate
max
143.86/s
Min
Lowest per-partition mean rate
min
125.59/s
Fetch lag timing
For each entry, the time lag between when it was created and when a zero-arc read node fetched it from the network via the get_timed_entries_network zome function.
Mean
mean
56.021945s
(SD =
23.727778s)
Max
Highest per-partition mean latency
max mean
81.910686s
Min
Lowest per-partition mean latency
min mean
34.279318s
Fetch lag rate
The number of entries per second fetched by zero-arc read nodes.
Mean
mean
37.5/s
Max
Highest per-partition mean rate
max
61/s
Min
Lowest per-partition mean rate
min
15.5/s
Open connections
The number of currently open connections to other conductors.
full-arc
p5
6
<
mean
6.91
(SD =
0.4)
<
p95
7
zero-arc
p5
6
<
mean
6.66
(SD =
0.68)
<
p95
7
Error count
The number of errors accumulated across all nodes.
0
Request roundtrip duration: 'get'
The time spent sending a get request and awaiting its response
mean
0.084273s
(SD =
0.033865s)
p50
0.073635s
p95
0.137055s
p99
0.146506s
Request roundtrip duration: 'get_links'
The time spent sending a get_links request and awaiting its response
mean
0.091068s
(SD =
0.037536s)
p50
0.109346s
p95
0.137259s
p99
0.171174s
Request roundtrip duration: 'send_validation_receipts'
The time spent sending a send_validation_receipts request and awaiting its response
mean
0.195568s
(SD =
0.202771s)
p50
0.153533s
p95
0.457115s
p99
0.855292s
Handle incoming response duration
The time spent handling an incoming response message of any type
mean
5.3e-05s
(SD =
0.000173s)
p50
2.5e-05s
p95
0.000134s
p99
0.0006s
Handle incoming request duration: 'get'
The time spent handling an incoming get request
mean
0.001411s
(SD =
0.003063s)
p50
0.000631s
p95
0.005402s
p99
0.01289s
Handle incoming request duration: 'get_links'
The time spent handling an incoming get_links request
mean
0.001258s
(SD =
0.004968s)
p50
0.000609s
p95
0.00239s
p99
0.012699s
Handle incoming request duration: 'send_validation_receipts'
The time spent handling an incoming send_validation_receipts request
mean
0.065525s
(SD =
0.160533s)
p50
0.027237s
p95
0.220474s
p99
0.60966s