DHT Sync Lag

Measures the lag between an agent publishing data and other peers being able to see it. This scenario has two roles:

  • write: A simple job that creates entries with a timestamp field and links them to a known base hash. For each write, the metric wt.custom.dht_sync_sent_count is incremented.
  • record_lag: A job that repeatedly queries for links from the known base hash. It tracks the records it has already seen; when a new record is found, it calculates the difference between the new record's timestamp and the current time and records it as the custom metric wt.custom.dht_sync_lag.

After each behaviour loop the metric wt.custom.dht_sync_recv_count is incremented.
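The record_lag behaviour described above can be sketched as follows. This is a minimal illustration only: the link representation, the metric-recording callback, and the helper names are assumptions, not the actual Wind Tunnel or Holochain API.

```python
import time

def record_lag_step(seen, links, record_metric):
    """One behaviour loop: find records linked from the base hash that we
    have not seen before, and record the sync lag for each new one."""
    for record_id, created_at in links:  # (hash, creation time in seconds)
        if record_id in seen:
            continue
        seen.add(record_id)
        lag = time.time() - created_at  # time from creation to first sighting
        record_metric("wt.custom.dht_sync_lag", lag)

# Usage: simulate discovering a record created about 2 seconds ago.
seen = set()
metrics = []
links = [("hash-1", time.time() - 2.0)]
record_lag_step(seen, links, lambda name, value: metrics.append((name, value)))
```

Because already-seen records are skipped, running the same loop again over unchanged links records no further lag samples, which matches the "first seen" semantics in the description.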

Started
Mon, 13 Apr 2026 11:21:48 UTC
Peer count
250
Peer count at end
62500
Behaviours
  • record_lag (1 agent)
  • write (1 agent)
Holochain version
0.6.1-rc.4
Wind Tunnel version
0.6.0
Run ID
dht_sync_lag_24337090660_1
Create rate
The average rate at which records are created, per agent.
Mean
mean
36.76
/s
Max
Highest per-partition mean rate
max
114.5
/s
Min
Lowest per-partition mean rate
min
0
/s
Sync lag timing
The average time between when a record was created and when it was first seen by an agent.
Mean
mean
1.9444706681200001e+31
s
SD
3.69775867118e+16
s
Max
Highest per-partition mean latency
max mean
3.4028236692100006e+32
s
Min
Lowest per-partition mean latency
min mean
24.662032
s
Sync lag rate
The average rate at which created records are discovered, per agent.
Mean
mean
47.27
/s
Max
Highest per-partition mean rate
max
358.5
/s
Min
Lowest per-partition mean rate
min
0
/s
Error count
The number of errors encountered during the scenario.
698
Holochain Metrics
Cascade duration
Time taken to execute a cascade (get) query inside Holochain.
mean
0.013211
s
SD
0.053689
s
p50
0.011222
s
<
p95
0.132789
s
<
p99
0.287568
s
Cascade fetch errors
Count of network fetch errors during cascade calls.
total
32
over 6166.455186674s
mean rate
97.95
/s
std
1326.12
/s
p5
0
/s
p95
128.11
/s
peak
58317.74
/s
WASM usage: 'timed::created_timed_entry'
Metered usage count of the WASM ribosome for this zome function.
total
754017812
over 65124.122981758s
mean rate
3.76861037114e+11
/s
std
4.51936217957e+13
/s
p5
9.64086371e+06
/s
p95
7.86423080559e+10
/s
peak
5.84787393548e+15
/s
WASM usage: 'timed::get_timed_entries_local'
Metered usage count of the WASM ribosome for this zome function.
total
11062627950
over 6311.02044601s
mean rate
9.82698390448e+09
/s
std
1.36204554536e+10
/s
p5
7.23632026e+06
/s
p95
2.99210827316e+10
/s
peak
1.31671266001e+11
/s
WASM usage: 'timed_integrity::entry_defs'
Metered usage count of the WASM ribosome for this zome function.
total
85322144
over 65124.122981758s
mean rate
3.502725033e+10
/s
std
4.60096439184e+12
/s
p5
2.58394636e+06
/s
p95
8.5714225506e+09
/s
peak
6.61725935484e+14
/s
Zome call duration: 'timed::created_timed_entry'
Duration of this zome call as measured by Holochain internally.
mean
0.019988
s
SD
0.013504
s
p50
0.021522
s
<
p95
0.047518
s
<
p99
0.069496
s
Zome call duration: 'timed::get_timed_entries_local'
Duration of this zome call as measured by Holochain internally.
mean
0.082164
s
SD
0.359477
s
p50
0.136994
s
<
p95
0.963982
s
<
p99
1.603307
s
WASM call duration: 'timed::created_timed_entry'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.017882
s
SD
0.012744
s
p50
0.018734
s
<
p95
0.044376
s
<
p99
0.066181
s
WASM call duration: 'timed::get_timed_entries_local'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.080975
s
SD
0.359387
s
p50
0.134883
s
<
p95
0.962322
s
<
p99
1.601746
s
WASM call duration: 'timed_integrity::entry_defs'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.001937
s
SD
0.000839
s
p50
0.002244
s
<
p95
0.003505
s
<
p99
0.00419
s
Host function call duration: '__hc__create_1'
Duration of this host function call invoked from within WASM.
mean
0.003518
s
SD
0.00456
s
p50
0.002781
s
<
p95
0.013425
s
<
p99
0.022942
s
Host function call duration: '__hc__create_link_1'
Duration of this host function call invoked from within WASM.
mean
0.003483
s
SD
0.004425
s
p50
0.002798
s
<
p95
0.013195
s
<
p99
0.022982
s
Host function call duration: '__hc__get_1'
Duration of this host function call invoked from within WASM.
mean
0.014469
s
SD
0.067686
s
p50
0.014689
s
<
p95
0.162255
s
<
p99
0.366511
s
Host function call duration: '__hc__get_links_1'
Duration of this host function call invoked from within WASM.
mean
0.002951
s
SD
0.006202
s
p50
0.005783
s
<
p95
0.018432
s
<
p99
0.033177
s
Host function call duration: '__hc__zome_info_1'
Duration of this host function call invoked from within WASM.
mean
0.003812
s
SD
0.00195
s
p50
0.004499
s
<
p95
0.007665
s
<
p99
0.009309
s
Post-commit duration
Time spent executing post-commit workflows.
mean
0.001352
s
SD
0.000904
s
p50
0.001353
s
<
p95
0.00289
s
<
p99
0.004072
s
Conductor uptime
Conductor uptime gauge. A drop in the trend indicates a restart during the run.
p5
88.12
s
<
mean
913.41
s
SD
529.17
s
<
p95
1737.86
s
Integrated ops
Count of DHT ops integrated since the conductor started. Resets on restart.
total
55442
over 65124.123021851s
mean rate
5.10162297e+06
/s
std
2.7571251247e+08
/s
p5
904.61
/s
p95
1.131081954e+07
/s
peak
4.01592592593e+10
/s
Integration delay
Delay between an op being stored and being integrated. High values indicate the pipeline is falling behind.
mean
103.741986
s
SD
117.530273
s
p50
57.609939
s
<
p95
309.118519
s
<
p99
581.405434
s
Validation attempts per op
Number of validation attempts required per op. Values consistently above 1 indicate retries.
mean
1.335537
SD
0.304194
p50
1.311344
<
p95
2.000402
<
p99
2.022895
App validation workflow duration
Time spent running the app validation workflow.
mean
17.427723
s
SD
37.554884
s
p50
23.666149
s
<
p95
107.804274
s
<
p99
145.923246
s
Countersigning workflow duration
Time spent running the countersigning workflow.
mean
0.004212
s
SD
0.004781
s
p50
0.003357
s
<
p95
0.008565
s
<
p99
0.021384
s
Integrate DHT ops workflow duration
Time spent running the integration workflow.
mean
0.046981
s
SD
2.051176
s
p50
0.040172
s
<
p95
0.281387
s
<
p99
10.562744
s
Publish DHT ops workflow duration
Time spent running the publish workflow.
mean
0.620208
s
SD
3.012233
s
p50
0.038477
s
<
p95
8.048021
s
<
p99
14.259195
s
System validation workflow duration
Time spent running the sys validation workflow.
mean
24.123395
s
SD
69.245992
s
p50
46.970759
s
<
p95
207.275694
s
<
p99
261.759966
s
Validation receipt workflow duration
Time spent running the validation receipt workflow.
mean
0.324873
s
SD
28.84273
s
p50
0.328827
s
<
p95
36.715321
s
<
p99
163.78122
s
Authored DB connection use time
Time spent holding authored database connections.
mean
0.009228
s
SD
0.010508
s
p50
0.008651
s
<
p95
0.026769
s
<
p99
0.055816
s
DHT DB connection use time
Time spent holding DHT database connections.
mean
0.002778
s
SD
0.003495
s
p50
0.002763
s
<
p95
0.009146
s
<
p99
0.017664
s
Conductor DB connection use time
Time spent holding conductor database connections.
mean
0.000343
s
SD
0.000223
s
p50
0.000376
s
<
p95
0.000767
s
<
p99
0.001017
s
Cache DB connection use time
Time spent holding cache database connections.
mean
0.006334
s
SD
0.012693
s
p50
0.000306
s
<
p95
0.020483
s
<
p99
0.066475
s
WASM DB connection use time
Time spent holding WASM database connections.
mean
0.001149
s
SD
0.007578
s
p50
0.000243
s
<
p95
0.000512
s
<
p99
0.05943
s
Peer meta store DB connection use time
Time spent holding peer meta store database connections.
mean
0.000171
s
SD
7.1e-05
s
p50
0.000198
s
<
p95
0.000283
s
<
p99
0.000377
s
Write transaction duration
Duration of exclusive write transactions across all databases.
mean
0.007763
s
SD
0.095416
s
p50
0.004629
s
<
p95
0.292419
s
<
p99
0.368131
s
Lair keystore: signing
Duration of Ed25519 signing requests to the Lair keystore.
mean
0.00188
s
SD
0.003766
s
p50
0.001735
s
<
p95
0.010606
s
<
p99
0.019673
s
P2P request roundtrip: 'get'
Time spent sending a get request and awaiting its response.
mean
2.013831
s
SD
29.661339
s
p50
3.148749
s
<
p95
61.169522
s
<
p99
69.51129
s
P2P request roundtrip: 'send_validation_receipts'
Time spent sending a send_validation_receipts request and awaiting its response.
mean
1.02341
s
SD
26.770611
s
p50
1.849572
s
<
p95
60.01522
s
<
p99
60.053722
s
Host Metrics
User CPU usage
CPU usage by user space
26.36
%
Network receive rate (primary)
Rate of bytes received on primary network interface
p5
56
B/s
<
mean
551.26
KiB/s
SD
757.24
KiB/s
<
p95
2.05
MiB/s
Network send rate (primary)
Rate of bytes sent on primary network interface
p5
0
B/s
<
mean
92.17
KiB/s
SD
307.68
KiB/s
<
p95
283.16
KiB/s
Total bytes received
Total bytes received on primary network interface
count
440.92
GiB
mean
6.49
MiB/s
Total bytes sent
Total bytes sent on primary network interface
count
71.52
GiB
mean
1.05
MiB/s
CPU spike anomaly
CPU spike anomaly was detected
Detected

Warning CPU p99 reached 95.2%

Memory leak anomaly
Memory leak anomaly was detected
Detected

Warning Memory growing at 620.58 MiB/s

Disk full anomaly
Disk full anomaly was detected
NotDetected
Swap thrashing anomaly
Swap thrashing anomaly was detected
NotDetected
System overload anomaly
System overload anomaly was detected
Detected

Warning 76% of hosts overloaded (load5/ncpus > 1.0)

Additional Host Metrics
CPU usage
Total CPU usage and kernel CPU usage
Total
p5
0.5
%
<
mean
35.83
%
SD
35.49
%
<
p95
88.21
%
System
9.47
%
CPU percentiles
CPU usage percentiles
p50
20.96
%
p95
88.21
%
p99
95.17
%
CPU usage above 80%
Number of hosts above 80% CPU and mean time spent above threshold for those hosts
count
196
hosts
mean time
0.26
s
Memory used percentage
Percentage of memory used
p5
8.63
%
<
mean
11.54
%
SD
4.09
%
<
p95
15.76
%
Memory available percentage
Percentage of available memory
p5
84.24
%
<
mean
88.46
%
SD
4.09
%
<
p95
91.37
%
Max host memory used
Maximum memory usage percentage across all hosts
max
79.41
%
Max host swap used percentage
Maximum swap space usage percentage across all hosts
max
0.14
%
Memory growth rate
Rate of memory growth over time
growth
620.58
MiB/s
Disk read throughput
Disk read throughput in MiB/s
3.70
MiB/s
Disk write throughput
Disk write throughput in MiB/s
391.08
MiB/s
Disk space utilization risk
Number of hosts nearing disk space capacity by mount point
Mount Point /
0/198
hosts
Mount Point /efi-boot
0/8
hosts
Mount Point /etc/hostname
0/11
hosts
Mount Point /etc/hosts
0/11
hosts
Mount Point /etc/resolv.conf
0/11
hosts
Mount Point /nix/store
0/13
hosts
System load average
System load averages over 1, 5, and 15 minutes. This is an unnormalised value not divided by number of CPUs, so it is only meaningful if all machines have the same core count.
1 min
1.63
5 min
1.42
15 min
1.01
CPU overloaded hosts
Percentage of hosts that experienced CPU overload
76.13
%
CPU pressure
CPU pressure over 10, 60, and 300 second averages
10 second average
p5
0
%
<
mean
19.2434
%
SD
23.9543
%
<
p95
59.77
%
60 second average
18.8318
%
300 second average
16.3119
%
Memory pressure some
Memory pressure (some tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
mean
0
%
SD
0
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
Memory pressure full
Memory pressure (all tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
mean
0
%
SD
0
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
I/O pressure some
I/O pressure (some tasks stalled) over 10, 60 and 300 second averages
10 second average
p5
0
%
<
mean
5.0349
%
SD
8.2911
%
<
p95
17.91
%
60 second average
4.9576
%
300 second average
4.3006
%
I/O pressure full
I/O pressure (all tasks stalled) over 10, 60 and 300 second averages
10 second average
p5
0
%
<
mean
2.997
%
SD
6.6468
%
<
p95
11.59
%
60 second average
2.9449
%
300 second average
2.5583
%
Holochain process CPU usage
CPU usage by Holochain process
p5
0
%
<
mean
33.35
%
SD
34.17
%
<
p95
83.61
%
Holochain process memory (PSS)
Proportional Set Size memory of Holochain process
p5
228.94
KiB
<
mean
305.97
KiB
SD
87.01
KiB
<
p95
453.96
KiB
Holochain process threads
Number of threads in Holochain process
p5
5
threads
<
mean
18.34
threads
SD
14.89
threads
<
p95
44
threads
Holochain process file descriptors
Number of file descriptors used by Holochain process
p5
32
file descriptors
<
mean
56.55
file descriptors
SD
24.76
file descriptors
<
p95
91
file descriptors

Full-Arc Create (Validated) / Zero-Arc Read

A mixed full-arc/zero-arc scenario in which full-arc nodes create data that gets validated, and zero-arc nodes read that data. The scenario has two roles:

  • full: A full-arc conductor that creates entries with a timestamp field. Those entries are validated and then retrieved by zero-arc nodes.
  • zero: A zero-arc conductor that reads the entries created by the full-arc node(s) and records the time lag between when each entry was created and when it was first discovered.
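The Mean, Max, and Min figures reported for timings and rates below are per-partition summaries. A sketch of how such a summary can be derived from per-partition samples follows; whether "Mean" is taken over all samples or over per-partition means is an assumption here, as is the partition grouping itself.

```python
from statistics import mean

def partition_summary(samples_by_partition):
    """Collapse per-partition samples into a summary: an overall mean over
    all samples, plus the highest and lowest per-partition mean."""
    per_partition_means = [mean(s) for s in samples_by_partition.values() if s]
    all_samples = [x for s in samples_by_partition.values() for x in s]
    return {
        "mean": mean(all_samples),
        "max_mean": max(per_partition_means),  # "Highest per-partition mean"
        "min_mean": min(per_partition_means),  # "Lowest per-partition mean"
    }

# Usage: two partitions with different latencies.
stats = partition_summary({"p1": [1.0, 3.0], "p2": [10.0]})
# stats["max_mean"] is 10.0 (partition p2); stats["min_mean"] is 2.0 (p1)
```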
Started
Mon, 13 Apr 2026 19:17:46 UTC
Peer count
250
Peer count at end
62250
Behaviours
  • full (1 agent)
  • zero (1 agent)
Holochain version
0.6.1-rc.4
Wind Tunnel version
0.6.0
Run ID
full_arc_create_validated_zero_arc_read_24337090660_1
Create rate
The number of timed entries created by the full-arc node(s) per second.
Mean
mean
31.17
/s
Max
Highest per-partition mean rate
max
80.43
/s
Min
Lowest per-partition mean rate
min
9.44
/s
Fetch lag timing
For each entry, the time lag between when it was created by a full-arc node and when a zero-arc node fetched it from the network.
Mean
mean
33.557732
s
SD
15.234897
s
Max
Highest per-partition mean latency
max mean
176.583486
s
Min
Lowest per-partition mean latency
min mean
0.422535
s
Fetch rate
The number of entries per second fetched by zero-arc nodes.
Mean
mean
8.82
/s
Max
Highest per-partition mean rate
max
116
/s
Min
Lowest per-partition mean rate
min
0
/s
Open connections
The number of currently open connections to other conductors.
full-arc
p5
14
<
mean
84.97
SD
53.7
<
p95
207
zero-arc
p5
6
<
mean
34.38
SD
29.59
<
p95
91
Retrieval errors
Errors encountered by zero-arc nodes when attempting to retrieve entries created by full-arc nodes.
Total
2957
Agents affected
214 / 214
Mean per agent
Total count divided by number of partitions, rounded to nearest whole number
14
Max
Highest count in any single partition
18
Min
Lowest count in any single partition
10
Error count
The number of errors accumulated across all nodes.
2963
Holochain Metrics
Cascade duration
Time taken to execute a cascade (get) query inside Holochain.
mean
0.007284
s
SD
0.005778
s
p50
0.003892
s
<
p95
0.01798
s
<
p99
0.026804
s
Cascade fetch errors
Count of network fetch errors during cascade calls.
total
359
over 1579.06011387s
mean rate
820.59
/s
std
4003.43
/s
p5
0
/s
p95
3406.96
/s
peak
103779.36
/s
Zome call duration: 'timed_and_validated::create_timed_entry'
Duration of this zome call as measured by Holochain internally.
mean
0.030611
s
SD
0.017615
s
p50
0.03246
s
<
p95
0.063786
s
<
p99
0.078891
s
Zome call duration: 'timed_and_validated::get_timed_entries_network'
Duration of this zome call as measured by Holochain internally.
mean
0.729483
s
SD
102.139515
s
p50
40.162256
s
<
p95
263.465554
s
<
p99
504.539167
s
WASM call duration: 'timed_and_validated::create_timed_entry'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.028031
s
SD
0.016839
s
p50
0.029474
s
<
p95
0.060278
s
<
p99
0.075204
s
WASM call duration: 'timed_and_validated::get_timed_entries_network'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.990572
s
SD
99.942691
s
p50
51.548429
s
<
p95
263.079011
s
<
p99
474.615108
s
WASM call duration: 'timed_and_validated_integrity::entry_defs'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.002442
s
SD
0.00078
s
p50
0.002262
s
<
p95
0.003507
s
<
p99
0.004107
s
WASM call duration: 'timed_and_validated_integrity::validate'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.01169
s
SD
0.003771
s
p50
0.00272
s
<
p95
0.013618
s
<
p99
0.017546
s
Host function call duration: '__hc__create_1'
Duration of this host function call invoked from within WASM.
mean
0.007455
s
SD
0.006283
s
p50
0.00745
s
<
p95
0.020377
s
<
p99
0.026558
s
Host function call duration: '__hc__create_link_1'
Duration of this host function call invoked from within WASM.
mean
0.007585
s
SD
0.006621
s
p50
0.007321
s
<
p95
0.021108
s
<
p99
0.02872
s
Host function call duration: '__hc__get_1'
Duration of this host function call invoked from within WASM.
mean
0.594887
s
SD
2.174835
s
p50
0.641739
s
<
p95
5.307709
s
<
p99
12.108995
s
Host function call duration: '__hc__get_links_1'
Duration of this host function call invoked from within WASM.
mean
0.150068
s
SD
5.169204
s
p50
1.378903
s
<
p95
10.428497
s
<
p99
27.602516
s
Host function call duration: '__hc__must_get_valid_record_1'
Duration of this host function call invoked from within WASM.
mean
0.00831
s
SD
0.006198
s
p50
0.004929
s
<
p95
0.021292
s
<
p99
0.027867
s
Host function call duration: '__hc__zome_info_1'
Duration of this host function call invoked from within WASM.
mean
0.00573
s
SD
0.001817
s
p50
0.003821
s
<
p95
0.00782
s
<
p99
0.00916
s
Post-commit duration
Time spent executing post-commit workflows.
mean
0.001714
s
SD
0.000622
s
p50
0.00052
s
<
p95
0.002146
s
<
p99
0.003021
s
Conductor uptime
Conductor uptime gauge. A drop in the trend indicates a restart during the run.
p5
46
s
<
mean
452.27
s
SD
261.25
s
<
p95
857.99
s
Integrated ops
Count of DHT ops integrated since the conductor started. Resets on restart.
total
43018
over 8078.747070542s
mean rate
436639.37
/s
std
9.6875806e+06
/s
p5
0
/s
p95
484620.55
/s
peak
6.8049993541e+08
/s
Integration delay
Delay between an op being stored and being integrated. High values indicate the pipeline is falling behind.
mean
57.004048
s
SD
20.596704
s
p50
0.16827
s
<
p95
61.580608
s
<
p99
99.475845
s
Validation attempts per op
Number of validation attempts required per op. Values consistently above 1 indicate retries.
mean
1.462478
SD
0.110819
p50
1.363636
<
p95
1.529592
<
p99
1.590956
App validation workflow duration
Time spent running the app validation workflow.
mean
6.638447
s
SD
11.289757
s
p50
0.009109
s
<
p95
17.815463
s
<
p99
63.906091
s
Countersigning workflow duration
Time spent running the countersigning workflow.
mean
0.005055
s
SD
0.004381
s
p50
0.004101
s
<
p95
0.015068
s
<
p99
0.02356
s
Integrate DHT ops workflow duration
Time spent running the integration workflow.
mean
0.036786
s
SD
0.01313
s
p50
0.002841
s
<
p95
0.040287
s
<
p99
0.056993
s
Publish DHT ops workflow duration
Time spent running the publish workflow.
mean
0.121539
s
SD
0.066007
s
p50
0.007554
s
<
p95
0.112513
s
<
p99
0.191398
s
System validation workflow duration
Time spent running the sys validation workflow.
mean
3.05973
s
SD
13.638236
s
p50
0.00791
s
<
p95
29.977169
s
<
p99
75.05089
s
Validation receipt workflow duration
Time spent running the validation receipt workflow.
mean
0.575916
s
SD
7.251229
s
p50
0.002874
s
<
p95
5.459709
s
<
p99
28.786217
s
Authored DB connection use time
Time spent holding authored database connections.
mean
0.006761
s
SD
0.00527
s
p50
0.003512
s
<
p95
0.016252
s
<
p99
0.024338
s
DHT DB connection use time
Time spent holding DHT database connections.
mean
0.001738
s
SD
0.003667
s
p50
0.001551
s
<
p95
0.010117
s
<
p99
0.017576
s
Conductor DB connection use time
Time spent holding conductor database connections.
mean
0.000223
s
SD
9.5e-05
s
p50
0.000163
s
<
p95
0.000389
s
<
p99
0.000492
s
Cache DB connection use time
Time spent holding cache database connections.
mean
0.005459
s
SD
0.005289
s
p50
0.002931
s
<
p95
0.016394
s
<
p99
0.024752
s
WASM DB connection use time
Time spent holding WASM database connections.
mean
0.000266
s
SD
0.000154
s
p50
0.000258
s
<
p95
0.000404
s
<
p99
0.000918
s
Peer meta store DB connection use time
Time spent holding peer meta store database connections.
mean
0.000199
s
SD
6.3e-05
s
p50
0.000171
s
<
p95
0.000251
s
<
p99
0.000323
s
Write transaction duration
Duration of exclusive write transactions across all databases.
mean
0.004008
s
SD
0.314196
s
p50
0.001706
s
<
p95
0.819529
s
<
p99
1.195974
s
Lair keystore: signing
Duration of Ed25519 signing requests to the Lair keystore.
mean
0.003764
s
SD
0.003025
s
p50
0.001193
s
<
p95
0.008324
s
<
p99
0.017166
s
P2P request roundtrip: 'get'
Time spent sending a get request and awaiting its response.
mean
0.943813
s
SD
30.787169
s
p50
0.757357
s
<
p95
60.007532
s
<
p99
61.112445
s
P2P request roundtrip: 'get_links'
Time spent sending a get_links request and awaiting its response.
mean
2.725659
s
SD
19.507109
s
p50
0.876174
s
<
p95
60.001495
s
<
p99
60.001925
s
P2P request roundtrip: 'send_validation_receipts'
Time spent sending a send_validation_receipts request and awaiting its response.
mean
0.687543
s
SD
24.701817
s
p50
0.776698
s
<
p95
60.018629
s
<
p99
60.036445
s

Remote Call Rate

Tests the throughput of remote_call operations. Each agent in this scenario waits for a certain number of peers to be available, or for up to two minutes, whichever happens first, before starting its behaviour.
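The peer-wait step described above amounts to polling with a deadline. A minimal sketch, assuming a hypothetical peer-count accessor and poll interval rather than the actual Wind Tunnel API:

```python
import time

def wait_for_peers(get_peer_count, target, timeout=120.0, poll_interval=1.0):
    """Block until at least `target` peers are visible or `timeout` seconds
    elapse, whichever happens first. Returns the last observed peer count."""
    deadline = time.monotonic() + timeout
    count = get_peer_count()
    while count < target and time.monotonic() < deadline:
        time.sleep(poll_interval)
        count = get_peer_count()
    return count

# Usage: a fake peer source that reaches the target on the third poll.
counts = iter([10, 120, 250])
final = wait_for_peers(lambda: next(counts), target=250, poll_interval=0.01)
# final == 250
```

Using a monotonic clock for the deadline keeps the two-minute cap correct even if the wall clock is adjusted mid-run.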

Started
Mon, 13 Apr 2026 10:17:03 UTC
Peer count
250
Peer count at end
62500
Behaviours
  • default (1 agent)
Holochain version
0.6.1-rc.4
Wind Tunnel version
0.6.0
Run ID
remote_call_rate_24337090660_1
Dispatch timing
The time between sending a remote call and the remote handler being invoked.
Mean
mean
-5.065982
s
SD
35.723524
s
Max
Highest per-partition mean latency
max mean
1.818002
s
Min
Lowest per-partition mean latency
min mean
-68.760866
s
Round-trip timing
The total elapsed time to get a response to the client.
Mean
mean
0.449986
s
SD
1.416604
s
Max
Highest per-partition mean latency
max mean
2.072221
s
Min
Lowest per-partition mean latency
min mean
0.022513
s
Holochain Metrics
Cascade fetch errors
Count of network fetch errors during cascade calls.
total
2
over 1910.259526687s
mean rate
3.57
/s
std
8.6
/s
p5
0
/s
p95
29.74
/s
peak
59.68
/s
WASM usage: 'remote_call::call_echo_timestamp'
Metered usage count of the WASM ribosome for this zome function.
total
2395930
over 2429.830248256s
mean rate
9.8859791221e+08
/s
std
5.61643407192e+09
/s
p5
1.02546237e+06
/s
p95
3.69965060169e+09
/s
peak
2.30450428938e+11
/s
WASM usage: 'remote_call::echo_timestamp'
Metered usage count of the WASM ribosome for this zome function.
total
178752
over 2429.830248256s
mean rate
1.9788134928e+08
/s
std
1.22259338805e+09
/s
p5
94967.98
/s
p95
6.4058098054e+08
/s
peak
6.09303970991e+10
/s
WASM usage: 'remote_call::init'
Metered usage count of the WASM ribosome for this zome function.
total
0
over 73831.800786833s
Not enough time data to show a trend.
Zome call duration: 'remote_call::call_echo_timestamp'
Duration of this zome call as measured by Holochain internally.
mean
0.352016
s
SD
0.499586
s
p50
0.406376
s
<
p95
1.126706
s
<
p99
2.095938
s
Zome call duration: 'remote_call::echo_timestamp'
Duration of this zome call as measured by Holochain internally.
mean
0.003478
s
SD
0.000981
s
p50
0.003793
s
<
p95
0.004964
s
<
p99
0.006179
s
WASM call duration: 'remote_call::call_echo_timestamp'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
1.951314
s
SD
3.855562
s
p50
2.931645
s
<
p95
12.098559
s
<
p99
18.072768
s
WASM call duration: 'remote_call::echo_timestamp'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.00219
s
SD
0.000646
s
p50
0.002371
s
<
p95
0.003319
s
<
p99
0.003748
s
WASM call duration: 'remote_call::init'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.003665
s
SD
0.001771
s
p50
0.003532
s
<
p95
0.00658
s
<
p99
0.008901
s
Host function call duration: '__hc__call_1'
Duration of this host function call invoked from within WASM.
mean
1.948551
s
SD
3.855513
s
p50
2.928887
s
<
p95
12.097358
s
<
p99
18.07001
s
Host function call duration: '__hc__create_1'
Duration of this host function call invoked from within WASM.
mean
0.000735
s
SD
0.000623
s
p50
0.000624
s
<
p95
0.00186
s
<
p99
0.002765
s
Host function call duration: '__hc__sys_time_1'
Duration of this host function call invoked from within WASM.
mean
2.1e-05
s
SD
1.1e-05
s
p50
1.8e-05
s
<
p95
4.1e-05
s
<
p99
4.5e-05
s
Host function call duration: '__hc__zome_info_1'
Duration of this host function call invoked from within WASM.
mean
0.000416
s
SD
0.00031
s
p50
0.00041
s
<
p95
0.001017
s
<
p99
0.001912
s
Post-commit duration
Time spent executing post-commit workflows.
mean
0.000737
s
SD
0.00082
s
p50
0.000483
s
<
p95
0.002441
s
<
p99
0.004335
s
Conductor uptime
Conductor uptime gauge. A drop in the trend indicates a restart during the run.
p5
45.98
s
<
mean
448.13
s
SD
259.49
s
<
p95
855.57
s
Integration delay
Delay between an op being stored and being integrated. High values indicate the pipeline is falling behind.
mean
33.733351
s
SD
20.684867
s
p50
24.775809
s
<
p95
63.963101
s
<
p99
107.522176
s
Validation attempts per op
Number of validation attempts required per op. Values consistently above 1 indicate retries.
mean
2.06831
SD
0.251252
p50
2.046844
<
p95
2.170421
<
p99
2.27696
App validation workflow duration
Time spent running the app validation workflow.
mean
0.38455
s
SD
0.865533
s
p50
0.398699
s
<
p95
2.444481
s
<
p99
3.927036
s
Countersigning workflow duration
Time spent running the countersigning workflow.
mean
0.003875
s
SD
0.003795
s
p50
0.003044
s
<
p95
0.010909
s
<
p99
0.021388
s
Integrate DHT ops workflow duration
Time spent running the integration workflow.
mean
0.005961
s
SD
0.004535
s
p50
0.004475
s
<
p95
0.011342
s
<
p99
0.028141
s
Publish DHT ops workflow duration
Time spent running the publish workflow.
mean
0.005676
s
SD
0.003032
s
p50
0.005468
s
<
p95
0.011018
s
<
p99
0.015631
s
System validation workflow duration
Time spent running the sys validation workflow.
mean
10.034573
s
SD
13.341557
s
p50
13.097713
s
<
p95
43.837729
s
<
p99
55.881315
s
Validation receipt workflow duration
Time spent running the validation receipt workflow.
mean
12.39434
s
SD
25.752484
s
p50
2.379933
s
<
p95
66.30615
s
<
p99
126.859116
s
Authored DB connection use time
Time spent holding authored database connections.
mean
0.000249
s
SD
0.000169
s
p50
0.000276
s
<
p95
0.000406
s
<
p99
0.000678
s
DHT DB connection use time
Time spent holding DHT database connections.
mean
0.000262
s
SD
0.000151
s
p50
0.000191
s
<
p95
0.000508
s
<
p99
0.000872
s
Conductor DB connection use time
Time spent holding conductor database connections.
mean
0.000184
s
SD
6.6e-05
s
p50
0.000184
s
<
p95
0.000266
s
<
p99
0.000377
s
Cache DB connection use time
Time spent holding cache database connections.
mean
0.000156
s
SD
6.5e-05
s
p50
0.000159
s
<
p95
0.000263
s
<
p99
0.000344
s
WASM DB connection use time
Time spent holding WASM database connections.
mean
0.000244
s
SD
0.000154
s
p50
0.000235
s
<
p95
0.000403
s
<
p99
0.0006
s
Peer meta store DB connection use time
Time spent holding peer meta store database connections.
mean
0.000123
s
SD
5e-05
s
p50
0.000123
s
<
p95
0.000203
s
<
p99
0.000305
s
Write transaction duration
Duration of exclusive write transactions across all databases.
mean
0.001162
s
SD
0.094262
s
p50
0.001497
s
<
p95
0.260357
s
<
p99
0.433871
s
Lair keystore: signing
Duration of Ed25519 signing requests to the Lair keystore.
mean
0.000388
s
SD
0.000425
s
p50
0.00046
s
<
p95
0.000673
s
<
p99
0.001281
s
P2P request roundtrip: 'get'
Time spent sending a get request and awaiting its response.
mean
0.902846
s
SD
29.849177
s
p50
0.675494
s
<
p95
60.0036
s
<
p99
72.040573
s
P2P request roundtrip: 'send_validation_receipts'
Time spent sending a send_validation_receipts request and awaiting its response.
mean
0.658259
s
SD
28.159316
s
p50
0.366354
s
<
p95
60.001689
s
<
p99
60.269577
s
P2P request roundtrip: 'call_remote'
Time spent sending a call_remote request and awaiting its response.
mean
0.798794
s
SD
29.519563
s
p50
0.714672
s
<
p95
60.001556
s
<
p99
63.551708
s
Host Metrics
User CPU usage
CPU usage by user space
5.91
%
Network receive rate (primary)
Rate of bytes received on primary network interface
p5
69
B/s
<
mean
399.30
KiB/s
SD
593.36
KiB/s
<
p95
1.75
MiB/s
Network send rate (primary)
Rate of bytes sent on primary network interface
p5
4
B/s
<
mean
51.99
KiB/s
SD
70.16
KiB/s
<
p95
183.16
KiB/s
Total bytes received
Total bytes received on primary network interface
count
150.85
GiB
mean
2.09
MiB/s
Total bytes sent
Total bytes sent on primary network interface
count
17.57
GiB
mean
249.50
KiB/s
CPU spike anomaly
CPU spike anomaly was detected
NotDetected
Memory leak anomaly
Memory leak anomaly was detected
Detected

Warning Memory growing at 692.49 MiB/s

Disk full anomaly
Disk full anomaly was detected
NotDetected
Swap thrashing anomaly
Swap thrashing anomaly was detected
NotDetected
System overload anomaly
System overload anomaly was detected
NotDetected
Additional Host Metrics
CPU usage
Total CPU usage and kernel CPU usage
Total
p5
0.4
%
<
mean
9.02
%
SD
15.71
%
<
p95
41.63
%
System
3.11
%
CPU percentiles
CPU usage percentiles
p50
4.71
%
p95
41.63
%
p99
84.54
%
CPU usage above 80%
Number of hosts above 80% CPU and mean time spent above threshold for those hosts
count
119
hosts
mean time
0.01
s
Memory used percentage
Percentage of memory used
p5
6.05
%
<
mean
10.42
%
SD
4.66
%
<
p95
15.05
%
Memory available percentage
Percentage of available memory
p5
84.95
%
<
mean
89.58
%
SD
4.66
%
<
p95
93.95
%
Max host memory used
Maximum memory usage percentage across all hosts
max
77.66
%
Max host swap used percentage
Maximum swap space usage percentage across all hosts
max
0.14
%
Memory growth rate
Rate of memory growth over time
growth
692.49
MiB/s
Disk read throughput
Disk read throughput in MiB/s
0.05
MiB/s
Disk write throughput
Disk write throughput in MiB/s
35.30
MiB/s
Disk space utilization risk
Number of hosts nearing disk space capacity by mount point
Mount Point /
0/187
hosts
Mount Point /efi-boot
0/8
hosts
Mount Point /etc/hostname
0/12
hosts
Mount Point /etc/hosts
0/12
hosts
Mount Point /etc/resolv.conf
0/12
hosts
Mount Point /nix/store
0/13
hosts
System load average
System load averages over 1, 5, and 15 minutes. This is an unnormalised value not divided by number of CPUs, so it is only meaningful if all machines have the same core count.
1 min
0.31
5 min
0.26
15 min
0.19
CPU overloaded hosts
Percentage of hosts that experienced CPU overload
0.00
%
CPU pressure
CPU pressure over 10, 60, and 300 second averages
10 second average
p5
0
%
<
mean
2.4384
%
SD
9.261
%
<
p95
5.65
%
60 second average
2.3842
%
300 second average
2.0446
%
Memory pressure some
Memory pressure (some tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
mean
0
%
SD
0
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
Memory pressure full
Memory pressure (all tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
mean
0
%
SD
0
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
I/O pressure some
I/O pressure (some tasks stalled) over 10, 60 and 300 second averages
10 second average
p5
0
%
<
mean
0.3737
%
SD
2.2356
%
<
p95
1.71
%
60 second average
0.4150
%
300 second average
0.7491
%
I/O pressure full
I/O pressure (all tasks stalled) over 10, 60 and 300 second averages
10 second average
p5
0
%
<
mean
0.2641
%
SD
2.0005
%
<
p95
1.14
%
60 second average
0.3045
%
300 second average
0.6398
%
Holochain process CPU usage
CPU usage by Holochain process
p5
0
%
<
mean
6.45
%
SD
13.32
%
<
p95
21.28
%
Holochain process memory (PSS)
Proportional Set Size memory of Holochain process
p5
207.90
KiB
<
mean
236.23
KiB
SD
41.53
KiB
<
p95
262.84
KiB
Holochain process threads
Number of threads in Holochain process
p5
5
threads
<
mean
21
threads
SD
14.88
threads
<
p95
43
threads
Holochain process file descriptors
Number of file descriptors used by Holochain process
p5
32
file descriptors
<
mean
62.59
file descriptors
SD
25.23
file descriptors
<
p95
96
file descriptors

Remote Signals

This scenario tests the throughput of remote_signals operations.
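The round-trip measurement this scenario reports can be sketched as a simple dispatch-then-poll loop. This is an illustrative sketch only, not the actual Wind Tunnel API; the function names and polling interval are assumptions, and only the 20 second default timeout comes from the report below.

```rust
use std::time::{Duration, Instant};

// Hypothetical sketch of the round-trip measurement: dispatch a signal to
// the remote peer, poll for its response signal, and either record the
// elapsed time or count a timeout (the report's default timeout is 20 s).
fn round_trip(
    dispatch: impl Fn(),
    response_received: impl Fn() -> bool,
    timeout: Duration,
) -> Result<Duration, ()> {
    let start = Instant::now();
    dispatch();
    while start.elapsed() < timeout {
        if response_received() {
            // Successful round trips feed the "Round trip time" metric.
            return Ok(start.elapsed());
        }
        std::thread::sleep(Duration::from_millis(5));
    }
    // Expired waits are counted under the "Timeouts" metric.
    Err(())
}
```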

Started
Mon, 13 Apr 2026 10:48:58 UTC
Peer count
500
Peer count at end
125000
Behaviours
  • default (1 agent)
Holochain version
0.6.1-rc.4
Wind Tunnel version
0.6.0
Run ID
remote_signals_24337090660_1
Round trip time
The time from when the origin dispatches a signal to when it receives the remote side's response signal.
mean
1.98196
s
SD
14.419376
s
p50
0.259922
s
<
p95
1.812093
s
<
p99
43.545632
s
Timeouts
The number of timeouts waiting for the remote side's response signal (default timeout is 20 seconds).
total
192
over 71228.665087685s
mean rate
4.04174413e+06
/s
std
2.8254829081e+08
/s
p5
35.56
/s
p95
136201.72
/s
peak
4.74e+10
/s
Holochain Metrics
Hidden by default. Click to toggle visibility.
Cascade fetch errors
Count of network fetch errors during cascade calls.
total
26
over 1838.21686197s
mean rate
14573.33
/s
std
940212.84
/s
p5
0
/s
p95
9962.27
/s
peak
8e+07
/s
WASM usage: 'remote_signal::init'
Metered usage count of the WASM ribosome for this zome function.
total
0
over 71918.093850906s
Not enough time data to show a trend.
WASM usage: 'remote_signal::recv_remote_signal'
Metered usage count of the WASM ribosome for this zome function.
total
36755229
over 70910.084321617s
mean rate
9.29481546899e+09
/s
std
1.27645162393e+11
/s
p5
42691.31
/s
p95
2.02707218393e+10
/s
peak
1.01283224125e+13
/s
WASM usage: 'remote_signal::signal_request'
Metered usage count of the WASM ribosome for this zome function.
total
3404835
over 71918.093850906s
mean rate
1.02225536013e+09
/s
std
4.07929062358e+10
/s
p5
0
/s
p95
1.0824064551e+09
/s
peak
5.28701086957e+12
/s
Zome call duration: 'remote_signal::recv_remote_signal'
Duration of this zome call as measured by Holochain internally.
mean
0.003706
s
SD
0.001042
s
p50
0.003956
s
<
p95
0.005028
s
<
p99
0.006295
s
Zome call duration: 'remote_signal::signal_request'
Duration of this zome call as measured by Holochain internally.
mean
0.004196
s
SD
0.001131
s
p50
0.004324
s
<
p95
0.005608
s
<
p99
0.006543
s
WASM call duration: 'remote_signal::init'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.003488
s
SD
0.00231
s
p50
0.003132
s
<
p95
0.006703
s
<
p99
0.011337
s
WASM call duration: 'remote_signal::recv_remote_signal'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.002284
s
SD
0.00068
s
p50
0.002446
s
<
p95
0.003262
s
<
p99
0.003731
s
WASM call duration: 'remote_signal::signal_request'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.002408
s
SD
0.000672
s
p50
0.002473
s
<
p95
0.003347
s
<
p99
0.003656
s
Host function call duration: '__hc__create_1'
Duration of this host function call invoked from within WASM.
mean
0.00067
s
SD
0.001131
s
p50
0.00054
s
<
p95
0.001401
s
<
p99
0.002528
s
Host function call duration: '__hc__emit_signal_1'
Duration of this host function call invoked from within WASM.
mean
9e-05
s
SD
2.6e-05
s
p50
9.6e-05
s
<
p95
0.00013
s
<
p99
0.000148
s
Host function call duration: '__hc__send_remote_signal_1'
Duration of this host function call invoked from within WASM.
mean
9.8e-05
s
SD
4e-05
s
p50
9.9e-05
s
<
p95
0.00016
s
<
p99
0.000209
s
Host function call duration: '__hc__sys_time_1'
Duration of this host function call invoked from within WASM.
mean
1.6e-05
s
SD
9e-06
s
p50
1.7e-05
s
<
p95
2.8e-05
s
<
p99
3.6e-05
s
Host function call duration: '__hc__zome_info_1'
Duration of this host function call invoked from within WASM.
mean
0.000625
s
SD
0.000806
s
p50
0.000399
s
<
p95
0.001792
s
<
p99
0.004917
s
Emitted local signals
Count of local signals emitted via the emit_signal host function.
total
1721
over 70910.084179427s
mean rate
1.06018128e+06
/s
std
5.349922113e+07
/s
p5
2.1
/s
p95
844282.11
/s
peak
6.8e+09
/s
Sent remote signals
Count of remote signals sent via the send_remote_signal host function.
total
49
over 71918.093715428s
mean rate
667604.81
/s
std
1.781736882e+07
/s
p5
0.2
/s
p95
778461.93
/s
peak
1.93829787234e+09
/s
Post-commit duration
Time spent executing post-commit workflows.
mean
0.000849
s
SD
0.00103
s
p50
0.000545
s
<
p95
0.00248
s
<
p99
0.004936
s
Conductor uptime
Conductor uptime gauge. A drop in the trend indicates a restart during the run.
p5
45.16
s
<
mean
448.67
s
SD
260.02
s
<
p95
855.24
s
Integration delay
Delay between an op being stored and being integrated. High values indicate the pipeline is falling behind.
mean
54.612832
s
SD
35.321845
s
p50
34.976407
s
<
p95
86.632809
s
<
p99
179.459743
s
Validation attempts per op
Number of validation attempts required per op. Values consistently above 1 indicate retries.
mean
2.031908
attempts
SD
0.338361
attempts
p50
2.014329
attempts
<
p95
2.088412
attempts
<
p99
2.161538
attempts
App validation workflow duration
Time spent running the app validation workflow.
mean
1.47942
s
SD
2.309146
s
p50
1.009595
s
<
p95
6.279799
s
<
p99
10.203155
s
Countersigning workflow duration
Time spent running the countersigning workflow.
mean
0.005534
s
SD
0.007918
s
p50
0.003437
s
<
p95
0.019246
s
<
p99
0.027546
s
Integrate DHT ops workflow duration
Time spent running the integration workflow.
mean
0.008795
s
SD
0.029764
s
p50
0.005385
s
<
p95
0.029928
s
<
p99
0.114676
s
Publish DHT ops workflow duration
Time spent running the publish workflow.
mean
0.007243
s
SD
0.003873
s
p50
0.006694
s
<
p95
0.013881
s
<
p99
0.020031
s
System validation workflow duration
Time spent running the sys validation workflow.
mean
26.058897
s
SD
23.802028
s
p50
22.968067
s
<
p95
69.041145
s
<
p99
94.550204
s
Validation receipt workflow duration
Time spent running the validation receipt workflow.
mean
7.901093
s
SD
19.350395
s
p50
0.640443
s
<
p95
49.730297
s
<
p99
90.727366
s
Authored DB connection use time
Time spent holding authored database connections.
mean
0.000267
s
SD
0.000124
s
p50
0.00029
s
<
p95
0.000486
s
<
p99
0.00076
s
DHT DB connection use time
Time spent holding DHT database connections.
mean
0.000579
s
SD
0.000537
s
p50
0.000196
s
<
p95
0.001432
s
<
p99
0.002812
s
Conductor DB connection use time
Time spent holding conductor database connections.
mean
0.000217
s
SD
0.000111
s
p50
0.000219
s
<
p95
0.000354
s
<
p99
0.000546
s
Cache DB connection use time
Time spent holding cache database connections.
mean
0.000166
s
SD
8.7e-05
s
p50
0.000162
s
<
p95
0.000302
s
<
p99
0.000449
s
WASM DB connection use time
Time spent holding WASM database connections.
mean
0.000257
s
SD
0.000259
s
p50
0.000214
s
<
p95
0.000489
s
<
p99
0.001405
s
Peer meta store DB connection use time
Time spent holding peer meta store database connections.
mean
0.000143
s
SD
6.3e-05
s
p50
0.000141
s
<
p95
0.000234
s
<
p99
0.000344
s
Write transaction duration
Duration of exclusive write transactions across all databases.
mean
0.001709
s
SD
0.118865
s
p50
0.002265
s
<
p95
0.334571
s
<
p99
0.499142
s
Lair keystore: signing
Duration of Ed25519 signing requests to the Lair keystore.
mean
0.000667
s
SD
0.000424
s
p50
0.000736
s
<
p95
0.00123
s
<
p99
0.00219
s
P2P request roundtrip: 'get'
Time spent sending a get request and awaiting its response.
mean
2.673272
s
SD
29.552832
s
p50
1.172245
s
<
p95
60.004244
s
<
p99
64.53671
s
P2P request roundtrip: 'send_validation_receipts'
Time spent sending a send_validation_receipts request and awaiting its response.
mean
0.798324
s
SD
26.815853
s
p50
0.374769
s
<
p95
60.001574
s
<
p99
60.002057
s
P2P received remote signals
Count of remote signals received.
total
1722
over 70910.084156867s
mean rate
405637.97
/s
std
7.05749007e+06
/s
p5
1.8
/s
p95
880723.01
/s
peak
7.7650429799e+08
/s
Host Metrics
User CPU usage
CPU usage by user space
7.64
%
Network receive rate (primary)
Rate of bytes received on primary network interface
p5
169
B/s
<
mean
399.46
KiB/s
SD
729.27
KiB/s
<
p95
2.23
MiB/s
Network send rate (primary)
Rate of bytes sent on primary network interface
p5
0
B/s
<
mean
45.07
KiB/s
SD
79.86
KiB/s
<
p95
198.62
KiB/s
Total bytes received
Total bytes received on primary network interface
count
252.17
GiB
mean
3.64
MiB/s
Total bytes sent
Total bytes sent on primary network interface
count
28.25
GiB
mean
417.87
KiB/s
CPU spike anomaly
Whether a CPU spike anomaly was detected
Detected

Warning CPU p99 reached 93.4%

Memory leak anomaly
Whether a memory leak anomaly was detected
Detected

Warning Memory growing at 746.85 MiB/s

Disk full anomaly
Whether a disk full anomaly was detected
NotDetected
Swap thrashing anomaly
Whether a swap thrashing anomaly was detected
NotDetected
System overload anomaly
Whether a system overload anomaly was detected
Detected

Warning 3% of hosts overloaded (load5/ncpus > 1.0)

Additional Host Metrics
Hidden by default. Click to toggle visibility.
CPU usage
Total CPU usage and kernel CPU usage
Total
p5
0.65
%
<
mean
11.64
%
SD
20.05
%
<
p95
65.64
%
System
4.00
%
CPU percentiles
CPU usage percentiles
p50
2.56
%
p95
65.64
%
p99
93.38
%
CPU usage above 80%
Number of hosts above 80% CPU and mean time spent above threshold for those hosts
count
199
hosts
mean time
0.05
s
Memory used percentage
Percentage of memory used
p5
8.6
%
<
mean
11.26
%
SD
3.53
%
<
p95
15.26
%
Memory available percentage
Percentage of available memory
p5
84.74
%
<
mean
88.74
%
SD
3.53
%
<
p95
91.4
%
Max host memory used
Maximum memory usage percentage across all hosts
max
79.33
%
Max host swap used percentage
Maximum swap space usage percentage across all hosts
max
0.14
%
Memory growth rate
Rate of memory growth over time
growth
746.85
MiB/s
Disk read throughput
Disk read throughput in MiB/s
0.14
MiB/s
Disk write throughput
Disk write throughput in MiB/s
79.83
MiB/s
Disk space utilization risk
Number of hosts nearing disk space capacity by mount point
Mount Point /
0/207
hosts
Mount Point /boot
0/1
hosts
Mount Point /efi-boot
0/6
hosts
Mount Point /etc/hostname
0/11
hosts
Mount Point /etc/hosts
0/11
hosts
Mount Point /etc/resolv.conf
0/11
hosts
Mount Point /nix/store
0/12
hosts
System load average
System load averages over 1, 5, and 15 minutes. This is an unnormalised value not divided by number of CPUs, so it is only meaningful if all machines have the same core count.
1 min
0.45
5 min
0.37
15 min
0.27
CPU overloaded hosts
Percentage of hosts that experienced CPU overload
3.02
%
CPU pressure
CPU pressure over 10, 60, and 300 second averages
10 second average
p5
0
%
<
mean
5.3849
%
SD
14.2631
%
<
p95
39.53
%
60 second average
5.1981
%
300 second average
4.3284
%
Memory pressure some
Memory pressure (some tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
mean
0
%
SD
0
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
Memory pressure full
Memory pressure (all tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
mean
0
%
SD
0
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
I/O pressure some
I/O pressure (some tasks stalled) over 10, 60 and 300 second averages
10 second average
p5
0
%
<
mean
0.6595
%
SD
2.8108
%
<
p95
4.27
%
60 second average
0.6348
%
300 second average
0.5294
%
I/O pressure full
I/O pressure (all tasks stalled) over 10, 60 and 300 second averages
10 second average
p5
0
%
<
mean
0.4401
%
SD
2.1292
%
<
p95
2.46
%
60 second average
0.4196
%
300 second average
0.3429
%
Holochain process CPU usage
CPU usage by Holochain process
p5
0
%
<
mean
5.81
%
SD
14.11
%
<
p95
37.36
%
Holochain process memory (PSS)
Proportional Set Size memory of Holochain process
p5
188.99
KiB
<
mean
230.81
KiB
SD
48.13
KiB
<
p95
267.96
KiB
Holochain process threads
Number of threads in Holochain process
p5
5
threads
<
mean
15.42
threads
SD
16.7
threads
<
p95
40
threads
Holochain process file descriptors
Number of file descriptors used by Holochain process
p5
32
file descriptors
<
mean
50.66
file descriptors
SD
24.82
file descriptors
<
p95
93
file descriptors

Two-party countersigning

This scenario tests the performance of countersigning operations. There are two roles: initiate and participate. The participants commit an entry to advertise that they are willing to participate in sessions. They listen for sessions and participate in one at a time.
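The participate role described above can be sketched as a loop that advertises once and then handles sessions strictly one at a time. All names here are hypothetical, not the real zome or scenario API; the sketch only mirrors the behaviour described in the paragraph above.

```rust
use std::time::Instant;

// Placeholder for a countersigning session offer.
struct Session;

// Hypothetical sketch of the "participate" role: commit one entry
// advertising willingness, then accept sessions one at a time, timing
// each from acceptance to completion.
fn participate_loop(
    advertise: impl Fn(),
    mut next_session: impl FnMut() -> Option<Session>,
    complete: impl Fn(Session) -> bool,
) -> Vec<(f64, bool)> {
    advertise(); // commit the entry advertising participation
    let mut results = Vec::new();
    // One session at a time: the next session is only fetched after the
    // previous one has completed or failed.
    while let Some(session) = next_session() {
        let start = Instant::now();
        let ok = complete(session);
        // Feeds "Session accepted -- timing" and the success ratio.
        results.push((start.elapsed().as_secs_f64(), ok));
    }
    results
}
```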

Started
Tue, 14 Apr 2026 00:26:16 UTC
Peer count
3
Peer count at end
9
Behaviours
  • initiate (1 agent)
  • participate (1 agent)
Holochain version
0.6.1-rc.4
Wind Tunnel version
0.6.0
Run ID
two_party_countersigning_24337090660_1
Session accepted -- timing
The duration of the session from acceptance to completion.
Mean
mean
0.169055
s
SD
0.102085
s
Max
Highest per-partition mean latency
max mean
0.179087
s
Min
Lowest per-partition mean latency
min mean
0.159023
s
Session accepted -- success rate
The number of accepted sessions that completed per second.
Mean
mean
23.54
/s
Max
Highest per-partition mean rate
max
29.07
/s
Min
Lowest per-partition mean rate
min
18
/s
Session accepted -- success ratio
Fraction of accepted sessions that completed successfully (0–1).
0.9825
Session initiated -- timing
The duration of the session from initiation to completion.
Mean
mean
0.216614
s
SD
0.152236
s
Max
Highest per-partition mean latency
max mean
0.216614
s
Min
Lowest per-partition mean latency
min mean
0.216614
s
Session initiated -- success rate
The number of initiated sessions that completed per second.
Mean
mean
40.07
/s
Max
Highest per-partition mean rate
max
40.07
/s
Min
Lowest per-partition mean rate
min
40.07
/s
Session initiated -- success ratio
Fraction of initiated sessions that completed successfully (0–1).
0.9904
Holochain Metrics
Hidden by default. Click to toggle visibility.
Cascade duration
Time taken to execute a cascade (get) query inside Holochain.
mean
0.011319
s
SD
0.003787
s
p50
0.010208
s
<
p95
0.014894
s
<
p99
0.015349
s
WASM usage: 'countersigning::accept_two_party'
Metered usage count of the WASM ribosome for this zome function.
total
38155606
over 280.442145698s
mean rate
4.54257156e+06
/s
std
3.42467264e+06
/s
p5
490432.84
/s
p95
7.54231059e+06
/s
peak
7.55470804e+06
/s
WASM usage: 'countersigning::call_remote_signal'
Metered usage count of the WASM ribosome for this zome function.
total
46937314
over 281.134893473s
mean rate
1.402561112e+07
/s
std
1.656766276e+07
/s
p5
602263.35
/s
p95
3.768033483e+07
/s
peak
3.825670894e+07
/s
WASM usage: 'countersigning::commit_two_party'
Metered usage count of the WASM ribosome for this zome function.
total
671986855
over 281.134893473s
mean rate
1.521532476e+08
/s
std
1.9981975797e+08
/s
p5
756610.41
/s
p95
4.4012936913e+08
/s
peak
4.4533556249e+08
/s
WASM usage: 'countersigning::init'
Metered usage count of the WASM ribosome for this zome function.
total
0
over 281.134893473s
Not enough time data to show a trend.
WASM usage: 'countersigning::initiator_hello'
Metered usage count of the WASM ribosome for this zome function.
total
0
over 280.003272124s
Not enough time data to show a trend.
WASM usage: 'countersigning::list_participants'
Metered usage count of the WASM ribosome for this zome function.
total
68102075
over 280.003272124s
mean rate
243218.57
/s
std
107143.76
/s
p5
142368.28
/s
p95
430508.16
/s
peak
453974.31
/s
WASM usage: 'countersigning::participant_hello'
Metered usage count of the WASM ribosome for this zome function.
total
0
over 280.442145698s
Not enough time data to show a trend.
WASM usage: 'countersigning::start_two_party'
Metered usage count of the WASM ribosome for this zome function.
total
213799049
over 280.003272124s
mean rate
763558.7
/s
std
135953.68
/s
p5
606129.29
/s
p95
1.05924613e+06
/s
peak
1.07812297e+06
/s
WASM usage: 'countersigning_integrity::entry_defs'
Metered usage count of the WASM ribosome for this zome function.
total
27040209
over 291.134991052s
mean rate
1.789887285e+07
/s
std
1.119127522e+07
/s
p5
0
/s
p95
3.286989654e+07
/s
peak
3.351202391e+07
/s
Zome call duration: 'countersigning::accept_two_party'
Duration of this zome call as measured by Holochain internally.
mean
0.023417
s
SD
0.00489
s
p50
0.022547
s
<
p95
0.032242
s
<
p99
0.032332
s
Zome call duration: 'countersigning::call_remote_signal'
Duration of this zome call as measured by Holochain internally.
mean
0.003797
s
SD
0.000636
s
p50
0.004242
s
<
p95
0.005017
s
<
p99
0.005405
s
Zome call duration: 'countersigning::commit_two_party'
Duration of this zome call as measured by Holochain internally.
mean
0.027768
s
SD
0.01327
s
p50
0.02605
s
<
p95
0.055511
s
<
p99
0.057689
s
Zome call duration: 'countersigning::initiator_hello'
Duration of this zome call as measured by Holochain internally.
mean
0.003159
s
SD
0
s
p50
0.003159
s
p95
0.003159
s
p99
0.003159
s
Zome call duration: 'countersigning::list_participants'
Duration of this zome call as measured by Holochain internally.
mean
0.020168
s
SD
0.003702
s
p50
0.019205
s
<
p95
0.023444
s
<
p99
0.023868
s
Zome call duration: 'countersigning::participant_hello'
Duration of this zome call as measured by Holochain internally.
mean
0.007363
s
SD
0.001034
s
p50
0.008397
s
p95
0.008397
s
p99
0.008397
s
Zome call duration: 'countersigning::start_two_party'
Duration of this zome call as measured by Holochain internally.
mean
0.031589
s
SD
0.00293
s
p50
0.029936
s
<
p95
0.034338
s
<
p99
0.03452
s
WASM call duration: 'countersigning::accept_two_party'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.021973
s
SD
0.00488
s
p50
0.021122
s
<
p95
0.030765
s
<
p99
0.030852
s
WASM call duration: 'countersigning::call_remote_signal'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.002311
s
SD
0.000331
s
p50
0.002483
s
<
p95
0.003012
s
<
p99
0.003228
s
WASM call duration: 'countersigning::commit_two_party'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.02652
s
SD
0.013184
s
p50
0.024785
s
<
p95
0.054031
s
<
p99
0.056226
s
WASM call duration: 'countersigning::init'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.012565
s
SD
0.002074
s
p50
0.012035
s
<
p95
0.015328
s
p99
0.015328
s
WASM call duration: 'countersigning::initiator_hello'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.002054
s
SD
0
s
p50
0.002054
s
p95
0.002054
s
p99
0.002054
s
WASM call duration: 'countersigning::list_participants'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.018261
s
SD
0.003764
s
p50
0.017264
s
<
p95
0.021647
s
<
p99
0.02208
s
WASM call duration: 'countersigning::participant_hello'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.006045
s
SD
0.0011
s
p50
0.007145
s
p95
0.007145
s
p99
0.007145
s
WASM call duration: 'countersigning::start_two_party'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.030135
s
SD
0.002946
s
p50
0.028439
s
<
p95
0.032929
s
<
p99
0.033109
s
WASM call duration: 'countersigning_integrity::entry_defs'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.001981
s
SD
0.000175
s
p50
0.001964
s
<
p95
0.002224
s
<
p99
0.002253
s
Host function call duration: '__hc__accept_countersigning_preflight_request_1'
Duration of this host function call invoked from within WASM.
mean
0.003198
s
SD
0.000541
s
p50
0.002822
s
<
p95
0.004335
s
<
p99
0.004703
s
Host function call duration: '__hc__agent_info_1'
Duration of this host function call invoked from within WASM.
mean
5.6e-05
s
SD
1.2e-05
s
p50
5.4e-05
s
<
p95
5.8e-05
s
<
p99
5.9e-05
s
Host function call duration: '__hc__call_1'
Duration of this host function call invoked from within WASM.
mean
0.015042
s
SD
0.00367
s
p50
0.014099
s
<
p95
0.020426
s
<
p99
0.020803
s
Host function call duration: '__hc__create_1'
Duration of this host function call invoked from within WASM.
mean
0.000637
s
SD
5e-05
s
p50
0.000615
s
<
p95
0.000702
s
<
p99
0.000758
s
Host function call duration: '__hc__create_link_1'
Duration of this host function call invoked from within WASM.
mean
0.000532
s
SD
3.1e-05
s
p50
0.000563
s
p95
0.000563
s
p99
0.000563
s
Host function call duration: '__hc__emit_signal_1'
Duration of this host function call invoked from within WASM.
mean
8.4e-05
s
SD
7e-06
s
p50
8.9e-05
s
<
p95
9.4e-05
s
<
p99
0.000103
s
Host function call duration: '__hc__get_agent_activity_1'
Duration of this host function call invoked from within WASM.
mean
0.019424
s
SD
0.011899
s
p50
0.017682
s
<
p95
0.044057
s
<
p99
0.046178
s
Host function call duration: '__hc__get_links_1'
Duration of this host function call invoked from within WASM.
mean
0.011442
s
SD
0.003788
s
p50
0.010332
s
<
p95
0.015015
s
<
p99
0.015469
s
Host function call duration: '__hc__sys_time_1'
Duration of this host function call invoked from within WASM.
mean
1.1e-05
s
SD
1e-06
s
p50
1.1e-05
s
<
p95
1.2e-05
s
p99
1.2e-05
s
Host function call duration: '__hc__zome_info_1'
Duration of this host function call invoked from within WASM.
mean
0.003467
s
SD
0.000723
s
p50
0.003319
s
<
p95
0.005395
s
p99
0.005395
s
Emitted local signals
Count of local signals emitted via the emit_signal host function.
total
1135
over 281.135040912s
mean rate
414.04
/s
std
503.18
/s
p5
3.53
/s
p95
1167.13
/s
peak
1210.44
/s
Post-commit duration
Time spent executing post-commit workflows.
mean
0.000384
s
SD
6.4e-05
s
p50
0.000403
s
<
p95
0.000543
s
<
p99
0.000653
s
Conductor uptime
Conductor uptime gauge. A drop in the trend indicates a restart during the run.
p5
16.82
s
<
mean
151.77
s
SD
86.56
s
<
p95
286.83
s
Integrated ops
Count of DHT ops integrated since the conductor started. Resets on restart.
total
6843
over 291.135139488s
mean rate
30.95
/s
std
21.13
/s
p5
0
/s
p95
82.05
/s
peak
86.55
/s
Integration delay
Delay between an op being stored and being integrated. High values indicate the pipeline is falling behind.
mean
0.152351
s
SD
0.368505
s
p50
0.062555
s
<
p95
0.985462
s
<
p99
2.456252
s
Validation attempts per op
Number of validation attempts required per op. Values consistently above 1 indicate retries.
mean
1.662878
attempts
SD
0.233469
attempts
p50
1.537229
attempts
<
p95
1.99302
attempts
<
p99
1.995572
attempts
App validation workflow duration
Time spent running the app validation workflow.
mean
0.013359
s
SD
0.269355
s
p50
0.014488
s
<
p95
0.070218
s
<
p99
2.436153
s
Countersigning workflow duration
Time spent running the countersigning workflow.
mean
0.01342
s
SD
0.003995
s
p50
0.013122
s
<
p95
0.014823
s
<
p99
0.016261
s
Integrate DHT ops workflow duration
Time spent running the integration workflow.
mean
0.004495
s
SD
0.00134
s
p50
0.004709
s
<
p95
0.006434
s
<
p99
0.006835
s
Publish DHT ops workflow duration
Time spent running the publish workflow.
mean
0.017383
s
SD
0.006177
s
p50
0.010997
s
<
p95
0.024928
s
<
p99
0.02735
s
System validation workflow duration
Time spent running the sys validation workflow.
mean
0.016387
s
SD
0.036844
s
p50
0.013174
s
<
p95
0.036214
s
<
p99
0.352572
s
Validation receipt workflow duration
Time spent running the validation receipt workflow.
mean
0.026972
s
SD
0.01003
s
p50
0.021845
s
<
p95
0.040163
s
<
p99
0.042386
s
Witnessing workflow duration
Time spent running the witnessing workflow.
mean
0.003747
s
SD
0.002917
s
p50
0.001403
s
<
p95
0.006971
s
<
p99
0.007068
s
Authored DB connection use time
Time spent holding authored database connections.
mean
0.001511
s
SD
0.000624
s
p50
0.000858
s
<
p95
0.002324
s
<
p99
0.002556
s
DHT DB connection use time
Time spent holding DHT database connections.
mean
0.001823
s
SD
0.000686
s
p50
0.00161
s
<
p95
0.00238
s
<
p99
0.002537
s
Conductor DB connection use time
Time spent holding conductor database connections.
mean
0.000249
s
SD
3e-05
s
p50
0.000235
s
<
p95
0.00031
s
<
p99
0.000316
s
Cache DB connection use time
Time spent holding cache database connections.
mean
0.0007
s
SD
0.000653
s
p50
0.000222
s
<
p95
0.001904
s
<
p99
0.001998
s
WASM DB connection use time
Time spent holding WASM database connections.
mean
0.000246
s
SD
2.1e-05
s
p50
0.000233
s
<
p95
0.000275
s
p99
0.000275
s
Peer meta store DB connection use time
Time spent holding peer meta store database connections.
mean
0.000265
s
SD
5e-05
s
p50
0.000235
s
<
p95
0.000313
s
<
p99
0.000314
s
Write transaction duration
Duration of exclusive write transactions across all databases.
mean
0.00245
s
SD
0.089392
s
p50
0.002139
s
<
p95
0.255386
s
p99
0.255386
s
Lair keystore: signing
Duration of Ed25519 signing requests to the Lair keystore.
mean
0.000546
s
SD
0.000137
s
p50
0.000541
s
<
p95
0.000641
s
<
p99
0.001586
s
P2P request roundtrip: 'get'
Time spent sending a get request and awaiting its response.
mean
0.005254
s
SD
0.007189
s
p50
0.005408
s
<
p95
0.014661
s
<
p99
0.064164
s
P2P request roundtrip: 'get_agent_activity'
Time spent sending a get_agent_activity request and awaiting its response.
mean
0.018764
s
SD
0.012306
s
p50
0.006819
s
<
p95
0.041179
s
<
p99
0.044688
s
P2P request roundtrip: 'send_validation_receipts'
Time spent sending a send_validation_receipts request and awaiting its response.
mean
0.015997
s
SD
0.006311
s
p50
0.014213
s
<
p95
0.019705
s
<
p99
0.024321
s
P2P request roundtrip: 'call_remote'
Time spent sending a call_remote request and awaiting its response.
mean
0.014307
s
SD
0.003638
s
p50
0.013294
s
<
p95
0.01961
s
<
p99
0.020004
s
Host Metrics
User CPU usage
CPU usage by user space
32.08
%
Network receive rate (primary)
Rate of bytes received on primary network interface
p5
50.16
KiB/s
<
mean
144.16
KiB/s
SD
265.46
KiB/s
<
p95
220.23
KiB/s
Network send rate (primary)
Rate of bytes sent on primary network interface
p5
18.41
KiB/s
<
mean
119.18
KiB/s
SD
51.24
KiB/s
<
p95
211.84
KiB/s
Total bytes received
Total bytes received on primary network interface
count
126.70
MiB
mean
432.47
KiB/s
Total bytes sent
Total bytes sent on primary network interface
count
104.74
MiB
mean
357.53
KiB/s
CPU spike anomaly
Whether a CPU spike anomaly was detected
NotDetected
Memory leak anomaly
Whether a memory leak anomaly was detected
Detected

Warning Memory growing at 177.15 MiB/s

Disk full anomaly
Whether a disk full anomaly was detected
NotDetected
Swap thrashing anomaly
Whether a swap thrashing anomaly was detected
NotDetected
System overload anomaly
Whether a system overload anomaly was detected
NotDetected
Additional Host Metrics
Hidden by default. Click to toggle visibility.
CPU usage
Total CPU usage and kernel CPU usage
Total
p5
28
%
<
mean
44.15
%
SD
8.66
%
<
p95
58.2
%
System
12.06
%
CPU percentiles
CPU usage percentiles
p50
44.11
%
p95
58.20
%
p99
64.95
%
CPU usage above 80%
Number of hosts above 80% CPU and mean time spent above threshold for those hosts
count
0
hosts
mean time
0.00
s
Memory used percentage
Percentage of memory used
p5
7.94
%
<
mean
9.86
%
SD
0.75
%
<
p95
10.75
%
Memory available percentage
Percentage of available memory
p5
89.25
%
<
mean
90.14
%
SD
0.75
%
<
p95
92.06
%
Max host memory used
Maximum memory usage percentage across all hosts
max
10.10
%
Max host swap used percentage
Maximum swap space usage percentage across all hosts
max
0.00
%
Memory growth rate
Rate of memory growth over time
growth
177.15
MiB/s
Disk read throughput
Disk read throughput in MiB/s
0.00
MiB/s
Disk write throughput
Disk write throughput in MiB/s
0.00
MiB/s
Disk space utilization risk
Number of hosts nearing disk space capacity by mount point
Mount Point /
0/3
hosts
System load average
System load averages over 1, 5, and 15 minutes. This is an unnormalised value not divided by number of CPUs, so it is only meaningful if all machines have the same core count.
1 min
1.07
5 min
0.51
15 min
0.20
CPU overloaded hosts
Percentage of hosts that experienced CPU overload
0.00
%
CPU pressure
CPU pressure over 10, 60, and 300 second averages
10 second average
p5
4.06
%
<
mean
16.0972
%
SD
6.689
%
<
p95
25.76
%
60 second average
13.1810
%
300 second average
6.0520
%
Memory pressure some
Memory pressure (some tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
mean
0
%
SD
0
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
Memory pressure full
Memory pressure (all tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
mean
0
%
SD
0
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
I/O pressure some
I/O pressure (some tasks stalled) over 10, 60 and 300 second averages
10 second average
p5
0.84
%
<
mean
2.1586
%
SD
0.5379
%
<
p95
2.93
%
60 second average
1.7819
%
300 second average
0.8412
%
I/O pressure full
I/O pressure (all tasks stalled) over 10, 60 and 300 second averages
10 second average
p5
0.63
%
<
mean
1.3532
%
SD
0.3351
%
<
p95
1.85
%
60 second average
1.1259
%
300 second average
0.5389
%
Holochain process CPU usage
CPU usage by Holochain process
p5
25.23
%
<
mean
39.44
%
SD
8.07
%
<
p95
53.27
%
Holochain process memory (PSS)
Proportional Set Size memory of Holochain process
p5
121.94
KiB
<
mean
242.82
KiB
SD
48.67
KiB
<
p95
286.11
KiB
Holochain process threads
Number of threads in Holochain process
p5
14
threads
<
mean
40.41
threads
SD
30.42
threads
<
p95
84
threads
Holochain process file descriptors
Number of file descriptors used by Holochain process
p5
64
file descriptors
<
mean
72.72
file descriptors
SD
5.03
file descriptors
<
p95
79
file descriptors

Validation Receipts

Creates an entry, waits for the required validation receipts, then repeats. Records the amount of time it takes to accumulate the required number of receipts for all DHT operations. This is measured to the nearest 20ms so that we don't keep the agent too busy checking for receipts.

Each agent in this scenario waits for a certain number of peers to be available or for up to two minutes, whichever happens first, before starting its behaviour.

By default, this scenario will wait for a complete set of validation receipts before committing the next record. If the NO_VALIDATION_COMPLETE environment variable is set, it will instead publish new records on every round, building up an ever-growing list of action hashes to check on.
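The receipt-polling loop described above can be sketched as follows. The function name is illustrative, not the actual scenario code; only the 20 ms check interval comes from the description.

```rust
use std::time::{Duration, Instant};

// Hypothetical sketch of the default behaviour: after publishing a
// record, poll every 20 ms until all of its DHT ops have a complete set
// of validation receipts, so the measured latency is only accurate to
// the nearest 20 ms.
fn wait_for_receipts(receipts_complete: impl Fn() -> bool) -> Duration {
    let start = Instant::now();
    loop {
        if receipts_complete() {
            // Recorded as "Receipts complete timing".
            return start.elapsed();
        }
        std::thread::sleep(Duration::from_millis(20));
    }
}
```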

Started
Mon, 13 Apr 2026 12:21:33 UTC
Peer count
250
Peer count at end
62500
Behaviours
  • default (1 agent)
Holochain version
0.6.1-rc.4
Wind Tunnel version
0.6.0
Run ID
validation_receipts_24337090660_1
Receipts complete timing
The amount of time between publishing a record and receiving the required number of validation receipts.
Mean
mean
206.148201
s
SD
52.065695
s
Max
Highest per-partition mean latency
max mean
801.752832
s
Min
Lowest per-partition mean latency
min mean
17.174386
s
Receipts complete rate
The number of complete validation receipt sets collected per second.
Mean
mean
0.37
/s
Max
Highest per-partition mean rate
max
1
/s
Min
Lowest per-partition mean rate
min
0
/s
Holochain Metrics
Cascade fetch errors
Count of network fetch errors during cascade calls.
total
20
over 1641.055409212s
mean rate
9.17
/s
std
26.34
/s
p5
0
/s
p95
100.24
/s
peak
146.5
/s
WASM usage: 'crud::create_sample_entry'
Metered usage count of the WASM ribosome for this zome function.
total
0
over 66361.882028583s
Not enough time data to show a trend.
WASM usage: 'crud::get_sample_entry_validation_receipts'
Metered usage count of the WASM ribosome for this zome function.
total
1078440
over 66361.882028583s
mean rate
2.6243796435e+08
/s
std
1.19691562898e+10
/s
p5
1418.98
/s
p95
2.8774709814e+08
/s
peak
1.17196341746e+12
/s
WASM usage: 'crud::init'
Metered usage count of the WASM ribosome for this zome function.
total
0
over 66361.882028583s
Not enough time data to show a trend.
WASM usage: 'crud_integrity::entry_defs'
Metered usage count of the WASM ribosome for this zome function.
total
0
over 66361.882028583s
Not enough time data to show a trend.
Zome call duration: 'crud::create_sample_entry'
Duration of this zome call as measured by Holochain internally.
mean
0.006517
s
SD
0.002509
s
p50
0.00696
s
<
p95
0.010527
s
<
p99
0.013435
s
Zome call duration: 'crud::get_sample_entry_validation_receipts'
Duration of this zome call as measured by Holochain internally.
mean
0.005185
s
SD
0.001727
s
p50
0.004956
s
<
p95
0.00751
s
<
p99
0.01063
s
WASM call duration: 'crud::create_sample_entry'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.005356
s
SD
0.002147
s
p50
0.005709
s
<
p95
0.008676
s
<
p99
0.011623
s
WASM call duration: 'crud::get_sample_entry_validation_receipts'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.003751
s
SD
0.001441
s
p50
0.003419
s
<
p95
0.005766
s
<
p99
0.008373
s
WASM call duration: 'crud::init'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.002708
s
SD
0.001748
s
p50
0.002486
s
<
p95
0.005552
s
<
p99
0.008647
s
WASM call duration: 'crud_integrity::entry_defs'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.001583
s
SD
0.000579
s
p50
0.001659
s
<
p95
0.002496
s
<
p99
0.003105
s
Host function call duration: '__hc__create_1'
Duration of this host function call invoked from within WASM.
mean
0.000611
s
SD
0.000589
s
p50
0.000555
s
<
p95
0.001401
s
<
p99
0.003524
s
Host function call duration: '__hc__get_validation_receipts_1'
Duration of this host function call invoked from within WASM.
mean
0.001568
s
SD
0.001072
s
p50
0.001086
s
<
p95
0.003182
s
<
p99
0.005259
s
Host function call duration: '__hc__zome_info_1'
Duration of this host function call invoked from within WASM.
mean
0.00274
s
SD
0.001067
s
p50
0.002839
s
<
p95
0.004551
s
<
p99
0.005703
s
Post-commit duration
Time spent executing post-commit workflows.
mean
0.000562
s
SD
0.000405
s
p50
0.000517
s
<
p95
0.001325
s
<
p99
0.001981
s
Conductor uptime
Conductor uptime gauge. A drop in the trend indicates a restart during the run.
p5
46.16
s
<
mean
450.71
s
SD
260.18
s
<
p95
856.4
s
Integration delay
Delay between an op being stored and being integrated. High values indicate the pipeline is falling behind.
mean
31.13865
s
SD
17.262132
s
p50
25.38775
s
<
p95
54.338926
s
<
p99
65.134821
s
Validation attempts per op
Number of validation attempts required per op. Values consistently above 1 indicate retries.
mean
2.049629
attempts
SD
0.343664
attempts
p50
2.03515
attempts
<
p95
2.105465
attempts
<
p99
2.202104
attempts
App validation workflow duration
Time spent running the app validation workflow.
mean
0.540375
s
SD
1.237559
s
p50
0.484711
s
<
p95
3.415077
s
<
p99
5.8082
s
Countersigning workflow duration
Time spent running the countersigning workflow.
mean
0.003916
s
SD
0.003763
s
p50
0.003052
s
<
p95
0.012415
s
<
p99
0.018826
s
Integrate DHT ops workflow duration
Time spent running the integration workflow.
mean
0.009314
s
SD
0.030779
s
p50
0.004809
s
<
p95
0.029994
s
<
p99
0.074805
s
Publish DHT ops workflow duration
Time spent running the publish workflow.
mean
0.005509
s
SD
0.002898
s
p50
0.005429
s
<
p95
0.010504
s
<
p99
0.014475
s
System validation workflow duration
Time spent running the sys validation workflow.
mean
9.999666
s
SD
13.358906
s
p50
11.144598
s
<
p95
40.242038
s
<
p99
51.374361
s
Validation receipt workflow duration
Time spent running the validation receipt workflow.
mean
14.367172
s
SD
21.345522
s
p50
5.427819
s
<
p95
61.41631
s
<
p99
104.460894
s
Authored DB connection use time
Time spent holding authored database connections.
mean
0.000254
s
SD
0.00011
s
p50
0.000283
s
<
p95
0.00041
s
<
p99
0.000618
s
DHT DB connection use time
Time spent holding DHT database connections.
mean
0.000517
s
SD
0.000416
s
p50
0.00026
s
<
p95
0.00117
s
<
p99
0.002145
s
Conductor DB connection use time
Time spent holding conductor database connections.
mean
0.000156
s
SD
5.1e-05
s
p50
0.000159
s
<
p95
0.000234
s
<
p99
0.000306
s
Cache DB connection use time
Time spent holding cache database connections.
mean
0.000141
s
SD
6.1e-05
s
p50
0.000149
s
<
p95
0.000247
s
<
p99
0.000348
s
WASM DB connection use time
Time spent holding WASM database connections.
mean
0.000223
s
SD
0.000135
s
p50
0.000208
s
<
p95
0.000415
s
<
p99
0.00082
s
Peer meta store DB connection use time
Time spent holding peer meta store database connections.
mean
0.000129
s
SD
5.1e-05
s
p50
0.000137
s
<
p95
0.000212
s
<
p99
0.00028
s
Write transaction duration
Duration of exclusive write transactions across all databases.
mean
0.001436
s
SD
0.094782
s
p50
0.001281
s
<
p95
0.251193
s
<
p99
0.406327
s
Lair keystore: signing
Duration of Ed25519 signing requests to the Lair keystore.
mean
0.000319
s
SD
0.000385
s
p50
0.000393
s
<
p95
0.000759
s
<
p99
0.002516
s
P2P request roundtrip: 'get'
Time spent sending a get request and awaiting its response.
mean
0.963434
s
SD
29.54677
s
p50
0.688414
s
<
p95
60.002598
s
<
p99
61.825149
s
P2P request roundtrip: 'send_validation_receipts'
Time spent sending a send_validation_receipts request and awaiting its response.
mean
0.332365
s
SD
28.900504
s
p50
0.272538
s
<
p95
60.001673
s
<
p99
60.002001
s
Host Metrics
User CPU usage
CPU usage by user space
4.46
%
Network receive rate (primary)
Rate of bytes received on primary network interface
p5
336
B/s
<
mean
221.12
KiB/s
SD
436.62
KiB/s
<
p95
1.29
MiB/s
Network send rate (primary)
Rate of bytes sent on primary network interface
p5
0
B/s
<
mean
30.78
KiB/s
SD
57.67
KiB/s
<
p95
120.84
KiB/s
Total bytes received
Total bytes received on primary network interface
count
118.70
GiB
mean
1.83
MiB/s
Total bytes sent
Total bytes sent on primary network interface
count
15.22
GiB
mean
240.54
KiB/s
CPU spike anomaly
CPU spike anomaly was detected
NotDetected
Memory leak anomaly
Memory leak anomaly was detected
Detected

Warning Memory growing at 309.56 MiB/s

Disk full anomaly
Disk full anomaly was detected
NotDetected
Swap thrashing anomaly
Swap thrashing anomaly was detected
NotDetected
System overload anomaly
System overload anomaly was detected
NotDetected
Additional Host Metrics
CPU usage
Total CPU usage and kernel CPU usage
Total
p5
0.75
%
<
mean
6.79
%
SD
13.83
%
<
p95
35.27
%
System
2.33
%
CPU percentiles
CPU usage percentiles
p50
2.07
%
p95
35.27
%
p99
80.04
%
CPU usage above 80%
Number of hosts above 80% CPU and mean time spent above threshold for those hosts
count
111
hosts
mean time
0.01
s
Memory used percentage
Percentage of memory used
p5
6.66
%
<
mean
10.26
%
SD
3.57
%
<
p95
14.61
%
Memory available percentage
Percentage of available memory
p5
85.39
%
<
mean
89.74
%
SD
3.57
%
<
p95
93.34
%
Max host memory used
Maximum memory usage percentage across all hosts
max
76.24
%
Max host swap used percentage
Maximum swap space usage percentage across all hosts
max
0.14
%
Memory growth rate
Rate of memory growth over time
growth
309.56
MiB/s
Disk read throughput
Disk read throughput in MiB/s
0.13
MiB/s
Disk write throughput
Disk write throughput in MiB/s
62.01
MiB/s
Disk space utilization risk
Number of hosts nearing disk space capacity by mount point
Mount Point /
0/154
hosts
Mount Point /boot
0/1
hosts
Mount Point /efi-boot
0/8
hosts
Mount Point /etc/hostname
0/13
hosts
Mount Point /etc/hosts
0/13
hosts
Mount Point /etc/resolv.conf
0/13
hosts
Mount Point /nix/store
0/14
hosts
System load average
System load averages over 1, 5, and 15 minutes. This is an unnormalised value not divided by number of CPUs, so it is only meaningful if all machines have the same core count.
1 min
0.23
5 min
0.21
15 min
0.24
CPU overloaded hosts
Percentage of hosts that experienced CPU overload
0.00
%
CPU pressure
CPU pressure over 10, 60, and 300 second averages
10 second average
p5
0
%
<
mean
1.6258
%
SD
6.6416
%
<
p95
5.58
%
60 second average
1.6210
%
300 second average
1.5352
%
Memory pressure some
Memory pressure (some tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
mean
0
%
SD
0
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
Memory pressure full
Memory pressure (all tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
mean
0
%
SD
0
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
I/O pressure some
I/O pressure (some tasks stalled) over 10, 60 and 300 second averages
10 second average
p5
0
%
<
mean
0.5042
%
SD
2.5648
%
<
p95
2.28
%
60 second average
0.4957
%
300 second average
0.4526
%
I/O pressure full
I/O pressure (all tasks stalled) over 10, 60 and 300 second averages
10 second average
p5
0
%
<
mean
0.4006
%
SD
2.2653
%
<
p95
1.8
%
60 second average
0.3915
%
300 second average
0.3528
%
Holochain process CPU usage
CPU usage by Holochain process
p5
0
%
<
mean
4.69
%
SD
12.48
%
<
p95
20.86
%
Holochain process memory (PSS)
Proportional Set Size memory of Holochain process
p5
217.29
KiB
<
mean
245.58
KiB
SD
40.58
KiB
<
p95
285.43
KiB
Holochain process threads
Number of threads in Holochain process
p5
1
threads
<
mean
14.8
threads
SD
13.79
threads
<
p95
39
threads
Holochain process file descriptors
Number of file descriptors used by Holochain process
p5
0
file descriptors
<
mean
49.06
file descriptors
SD
29.47
file descriptors
<
p95
92
file descriptors

Write/get_agent_activity

A scenario where write peers write entries, while get_agent_activity peers each query a single write agent's activity with get_agent_activity.

Before a target write peer and the requesting get_agent_activity peer are in sync, this measures get_agent_activity call performance over the network. Once a write peer reaches sync with a get_agent_activity peer, the write peer will have published its actions and entries, so the get_agent_activity calls will likely find most of the data they need locally. At that point this measures database query performance and the code paths through host functions.
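The timing side of this measurement amounts to wrapping each zome call in a stopwatch. A minimal sketch, where `call` is a stand-in for the real get_agent_activity_full conductor request (an assumption, not the scenario's actual code):

```rust
use std::time::{Duration, Instant};

/// Run a zome call and report how long it took. In the scenario this
/// latency is what feeds the "get_agent_activity_full zome call timing"
/// metric; here `call` is any closure standing in for the request.
fn timed_call<T>(call: impl FnOnce() -> T) -> (T, Duration) {
    let start = Instant::now();
    let result = call();
    (result, start.elapsed())
}

fn main() {
    // Stand-in workload instead of a real get_agent_activity request.
    let (sum, elapsed) = timed_call(|| (0u64..1_000).sum::<u64>());
    println!("sum = {sum}, took {elapsed:?}");
}
```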

Started
Mon, 13 Apr 2026 13:07:56 UTC
Peer count
250
Peer count at end
62500
Behaviours
  • get_agent_activity (1 agent)
  • write (1 agent)
Holochain version
0.6.1-rc.4
Wind Tunnel version
0.6.0
Run ID
write_get_agent_activity_24337090660_1
Observed chain advancement
The highest observed action sequence per write agent, aggregated across all reading agents. The reading dimension is collapsed by taking the maximum observed value for each write agent (any reader successfully tracking that point counts as propagation). Captures how far readers tracked writers’ chains during the run.
Mean max (across write agents)
Mean of the highest chain head value observed per write agent. Each write agent contributes its maximum — the value seen by any reader.
3988.73
Max
Highest chain head value observed for any single write agent.
8131
Write agents observed
33
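The aggregation described above (collapse the reader dimension by maximum, then average across write agents) can be sketched as follows. The data and function name are illustrative, not the report's actual pipeline:

```rust
use std::collections::HashMap;

/// For each write agent, keep the maximum action sequence any reader
/// observed, then average those per-writer maxima. Assumes at least one
/// observation; pairs are (write_agent, observed_action_seq).
fn mean_max_observed(observations: &[(&str, u64)]) -> f64 {
    let mut max_per_writer: HashMap<&str, u64> = HashMap::new();
    for &(writer, seq) in observations {
        let e = max_per_writer.entry(writer).or_insert(0);
        *e = (*e).max(seq);
    }
    max_per_writer.values().sum::<u64>() as f64 / max_per_writer.len() as f64
}

fn main() {
    // Hypothetical observations from two readers tracking two writers.
    let obs = [("w1", 120), ("w1", 340), ("w2", 900), ("w2", 50)];
    println!("mean max = {}", mean_max_observed(&obs)); // (340 + 900) / 2
}
```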
get_agent_activity_full zome call timing
The time taken to call the zome function that retrieves information on a write peer's source chain.
Mean
mean
0.50817
s
SD
0.747894
s
Max
Highest per-partition mean latency
max mean
2.663684
s
Min
Lowest per-partition mean latency
min mean
0.071138
s
Error count
The number of errors accumulated.
592
Holochain Metrics
Cascade duration
Time taken to execute a cascade (get) query inside Holochain.
mean
0.006429
s
SD
0.00563
s
p50
0.002223
s
<
p95
0.014947
s
<
p99
0.026508
s
Cascade fetch errors
Count of network fetch errors during cascade calls.
total
2
over 1700.464841695s
mean rate
122.02
/s
std
1268.38
/s
p5
0
/s
p95
274.67
/s
peak
60251.85
/s
Zome call duration: 'agent_activity::announce_write_behaviour'
Duration of this zome call as measured by Holochain internally.
mean
0.008338
s
SD
0.00383
s
p50
0.008136
s
<
p95
0.011938
s
<
p99
0.033752
s
Zome call duration: 'agent_activity::create_sample_entry'
Duration of this zome call as measured by Holochain internally.
mean
0.009109
s
SD
0.004313
s
p50
0.009421
s
<
p95
0.018152
s
<
p99
0.026314
s
Zome call duration: 'agent_activity::get_agent_activity_full'
Duration of this zome call as measured by Holochain internally.
mean
0.292364
s
SD
0.36986
s
p50
0.344431
s
<
p95
0.896695
s
<
p99
2.321454
s
Zome call duration: 'agent_activity::get_random_agent_with_write_behaviour'
Duration of this zome call as measured by Holochain internally.
mean
0.013933
s
SD
0.657278
s
p50
0.012849
s
<
p95
1.433679
s
<
p99
4.384395
s
WASM call duration: 'agent_activity::announce_write_behaviour'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.006757
s
SD
0.003412
s
p50
0.006484
s
<
p95
0.009352
s
<
p99
0.030278
s
WASM call duration: 'agent_activity::create_sample_entry'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.007235
s
SD
0.003717
s
p50
0.007324
s
<
p95
0.015089
s
<
p99
0.022804
s
WASM call duration: 'agent_activity::get_agent_activity_full'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.377508
s
SD
0.71596
s
p50
0.429798
s
<
p95
1.68282
s
<
p99
3.321485
s
WASM call duration: 'agent_activity::get_random_agent_with_write_behaviour'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.013304
s
SD
1.460169
s
p50
0.011922
s
<
p95
1.68942
s
<
p99
7.914789
s
WASM call duration: 'agent_activity_integrity::entry_defs'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.001741
s
SD
0.000824
s
p50
0.001807
s
<
p95
0.002499
s
<
p99
0.002852
s
Host function call duration: '__hc__agent_info_1'
Duration of this host function call invoked from within WASM.
mean
5.2e-05
s
SD
1.9e-05
s
p50
5.4e-05
s
<
p95
8e-05
s
<
p99
0.000118
s
Host function call duration: '__hc__create_1'
Duration of this host function call invoked from within WASM.
mean
0.001716
s
SD
0.002561
s
p50
0.001096
s
<
p95
0.007954
s
<
p99
0.013085
s
Host function call duration: '__hc__create_link_1'
Duration of this host function call invoked from within WASM.
mean
0.000681
s
SD
0.00034
s
p50
0.000637
s
<
p95
0.001185
s
<
p99
0.002067
s
Host function call duration: '__hc__get_agent_activity_1'
Duration of this host function call invoked from within WASM.
mean
0.374901
s
SD
0.715733
s
p50
0.42694
s
<
p95
1.680292
s
<
p99
3.315718
s
Host function call duration: '__hc__get_links_1'
Duration of this host function call invoked from within WASM.
mean
0.007847
s
SD
1.460121
s
p50
0.006342
s
<
p95
1.683644
s
<
p99
7.909065
s
Host function call duration: '__hc__random_bytes_1'
Duration of this host function call invoked from within WASM.
mean
2.5e-05
s
SD
9e-06
s
p50
2.6e-05
s
<
p95
4e-05
s
<
p99
5e-05
s
Host function call duration: '__hc__zome_info_1'
Duration of this host function call invoked from within WASM.
mean
0.003261
s
SD
0.001053
s
p50
0.003179
s
<
p95
0.005058
s
<
p99
0.0059
s
Post-commit duration
Time spent executing post-commit workflows.
mean
0.000961
s
SD
0.000515
s
p50
0.000674
s
<
p95
0.001787
s
<
p99
0.002564
s
Conductor uptime
Conductor uptime gauge. A drop in the trend indicates a restart during the run.
p5
45.94
s
<
mean
454.75
s
SD
263.48
s
<
p95
865.93
s
Integrated ops
Count of DHT ops integrated since the conductor started. Resets on restart.
total
26632
over 7113.316351541s
mean rate
1.1539117e+06
/s
std
5.7343297e+06
/s
p5
3379.62
/s
p95
3.67278266e+06
/s
peak
2.5124323461e+08
/s
Integration delay
Delay between an op being stored and being integrated. High values indicate the pipeline is falling behind.
mean
86.400975
s
SD
62.882725
s
p50
54.607255
s
<
p95
185.602718
s
<
p99
287.475991
s
Validation attempts per op
Number of validation attempts required per op. Values consistently above 1 indicate retries.
mean
1.710014
attempts
SD
0.366274
attempts
p50
1.99831
attempts
<
p95
2.012153
attempts
<
p99
2.022217
attempts
App validation workflow duration
Time spent running the app validation workflow.
mean
9.103243
s
SD
11.071121
s
p50
9.14114
s
<
p95
33.418895
s
<
p99
46.041631
s
Countersigning workflow duration
Time spent running the countersigning workflow.
mean
0.004318
s
SD
0.003572
s
p50
0.003717
s
<
p95
0.010644
s
<
p99
0.018164
s
Integrate DHT ops workflow duration
Time spent running the integration workflow.
mean
0.025733
s
SD
1.244882
s
p50
0.021566
s
<
p95
1.689333
s
<
p99
7.009879
s
Publish DHT ops workflow duration
Time spent running the publish workflow.
mean
0.333386
s
SD
0.669918
s
p50
0.009616
s
<
p95
1.529996
s
<
p99
3.720702
s
System validation workflow duration
Time spent running the sys validation workflow.
mean
21.576723
s
SD
29.635274
s
p50
26.740608
s
<
p95
91.289317
s
<
p99
133.032747
s
Validation receipt workflow duration
Time spent running the validation receipt workflow.
mean
0.341196
s
SD
17.069489
s
p50
0.787208
s
<
p95
40.02914
s
<
p99
91.524013
s
Authored DB connection use time
Time spent holding authored database connections.
mean
0.002307
s
SD
0.002278
s
p50
0.000783
s
<
p95
0.007106
s
<
p99
0.008776
s
DHT DB connection use time
Time spent holding DHT database connections.
mean
0.006499
s
SD
0.005005
s
p50
0.00408
s
<
p95
0.015354
s
<
p99
0.020887
s
Conductor DB connection use time
Time spent holding conductor database connections.
mean
0.000232
s
SD
0.000106
s
p50
0.000233
s
<
p95
0.000455
s
<
p99
0.000611
s
Cache DB connection use time
Time spent holding cache database connections.
mean
0.002712
s
SD
0.003105
s
p50
0.000362
s
<
p95
0.00709
s
<
p99
0.016098
s
WASM DB connection use time
Time spent holding WASM database connections.
mean
0.000271
s
SD
0.000236
s
p50
0.00025
s
<
p95
0.000448
s
<
p99
0.001274
s
Peer meta store DB connection use time
Time spent holding peer meta store database connections.
mean
0.000167
s
SD
7.1e-05
s
p50
0.000176
s
<
p95
0.000274
s
<
p99
0.000481
s
Write transaction duration
Duration of exclusive write transactions across all databases.
mean
0.004872
s
SD
0.097381
s
p50
0.002575
s
<
p95
0.287445
s
<
p99
0.36086
s
Lair keystore: signing
Duration of Ed25519 signing requests to the Lair keystore.
mean
0.00093
s
SD
0.001464
s
p50
0.00063
s
<
p95
0.002672
s
<
p99
0.008737
s
P2P request roundtrip: 'get'
Time spent sending a get request and awaiting its response.
mean
4.801918
s
SD
29.595097
s
p50
2.129112
s
<
p95
60.170799
s
<
p99
61.293708
s
P2P request roundtrip: 'get_links'
Time spent sending a get_links request and awaiting its response.
mean
7.053762
s
SD
23.271026
s
p50
0.893737
s
<
p95
60.001652
s
<
p99
60.002486
s
P2P request roundtrip: 'get_agent_activity'
Time spent sending a get_agent_activity request and awaiting its response.
mean
0.46653
s
SD
29.231398
s
p50
0.518826
s
<
p95
60.00423
s
<
p99
61.978554
s
P2P request roundtrip: 'send_validation_receipts'
Time spent sending a send_validation_receipts request and awaiting its response.
mean
2.581051
s
SD
24.803062
s
p50
7.084707
s
<
p95
60.002381
s
<
p99
60.007224
s
Host Metrics
User CPU usage
CPU usage by user space
31.29
%
Network receive rate (primary)
Rate of bytes received on primary network interface
p5
130
B/s
<
mean
738.33
KiB/s
SD
674.53
KiB/s
<
p95
1.99
MiB/s
Network send rate (primary)
Rate of bytes sent on primary network interface
p5
130
B/s
<
mean
101.84
KiB/s
SD
229.84
KiB/s
<
p95
311.40
KiB/s
Total bytes received
Total bytes received on primary network interface
count
199.83
GiB
mean
4.62
MiB/s
Total bytes sent
Total bytes sent on primary network interface
count
27.30
GiB
mean
646.00
KiB/s
CPU spike anomaly
CPU spike anomaly was detected
Detected

Warning CPU p99 reached 95.3%

Memory leak anomaly
Memory leak anomaly was detected
Detected

Warning Memory growing at 686.09 MiB/s

Disk full anomaly
Disk full anomaly was detected
NotDetected
Swap thrashing anomaly
Swap thrashing anomaly was detected
NotDetected
System overload anomaly
System overload anomaly was detected
Detected

Warning 32% of hosts overloaded (load5/ncpus > 1.0)

Additional Host Metrics
CPU usage
Total CPU usage and kernel CPU usage
Total
p5
0.86
%
<
mean
43.84
%
SD
32.86
%
<
p95
89.89
%
System
12.55
%
CPU percentiles
CPU usage percentiles
p50
48.17
%
p95
89.89
%
p99
95.31
%
CPU usage above 80%
Number of hosts above 80% CPU and mean time spent above threshold for those hosts
count
216
hosts
mean time
0.25
s
Memory used percentage
Percentage of memory used
p5
6.5
%
<
mean
11.08
%
SD
4.51
%
<
p95
15.64
%
Memory available percentage
Percentage of available memory
p5
84.36
%
<
mean
88.92
%
SD
4.51
%
<
p95
93.5
%
Max host memory used
Maximum memory usage percentage across all hosts
max
78.26
%
Max host swap used percentage
Maximum swap space usage percentage across all hosts
max
0.14
%
Memory growth rate
Rate of memory growth over time
growth
686.09
MiB/s
Disk read throughput
Disk read throughput in MiB/s
0.91
MiB/s
Disk write throughput
Disk write throughput in MiB/s
239.38
MiB/s
Disk space utilization risk
Number of hosts nearing disk space capacity by mount point
Mount Point /
0/218
hosts
Mount Point /efi-boot
0/7
hosts
Mount Point /etc/hostname
0/10
hosts
Mount Point /etc/hosts
0/10
hosts
Mount Point /etc/resolv.conf
0/10
hosts
Mount Point /nix/store
0/12
hosts
System load average
System load averages over 1, 5, and 15 minutes. This is an unnormalised value not divided by number of CPUs, so it is only meaningful if all machines have the same core count.
1 min
1.98
5 min
1.44
15 min
0.78
CPU overloaded hosts
Percentage of hosts that experienced CPU overload
31.95
%
CPU pressure
CPU pressure over 10, 60, and 300 second averages
10 second average
p5
0
%
<
mean
26.8692
%
SD
28.9312
%
<
p95
80.99
%
60 second average
25.3898
%
300 second average
18.3097
%
Memory pressure some
Memory pressure (some tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
mean
0
%
SD
0
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
Memory pressure full
Memory pressure (all tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
mean
0
%
SD
0
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
I/O pressure some
I/O pressure (some tasks stalled) over 10, 60 and 300 second averages
10 second average
p5
0
%
<
mean
4.754
%
SD
7.2773
%
<
p95
14.37
%
60 second average
4.5680
%
300 second average
3.4045
%
I/O pressure full
I/O pressure (all tasks stalled) over 10, 60 and 300 second averages
10 second average
p5
0
%
<
mean
2.8087
%
SD
6.1286
%
<
p95
8.81
%
60 second average
2.7113
%
300 second average
2.0726
%
Holochain process CPU usage
CPU usage by Holochain process
p5
0
%
<
mean
42.48
%
SD
31.31
%
<
p95
85.95
%
Holochain process memory (PSS)
Proportional Set Size memory of Holochain process
p5
187.92
KiB
<
mean
308.90
KiB
SD
75.41
KiB
<
p95
401.89
KiB
Holochain process threads
Number of threads in Holochain process
p5
5
threads
<
mean
24.92
threads
SD
16.23
threads
<
p95
48
threads
Holochain process file descriptors
Number of file descriptors used by Holochain process
p5
32
file descriptors
<
mean
67.17
file descriptors
SD
21.36
file descriptors
<
p95
91
file descriptors

Write/get_agent_activity with volatile conductors

A scenario where write peers write entries, while get_agent_activity_volatile peers each query a single write agent's activity with get_agent_activity but shut down and restart their conductors at semi-random intervals.

Before a target write peer and the requesting get_agent_activity_volatile peer are in sync, this measures get_agent_activity call performance over the network. Once a write peer reaches sync with a get_agent_activity_volatile peer, the write peer will have published its actions and entries, so the get_agent_activity calls will likely find most of the data they need locally. At that point this measures database query performance and the code paths through host functions.
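The "semi-random intervals" behaviour can be sketched as a schedule loop. The interval bounds and the small linear congruential generator below are assumptions for illustration; the real scenario's randomness source and timing are not specified here.

```rust
/// Tiny linear congruential generator standing in for whatever randomness
/// the real scenario uses (an assumption for this sketch).
struct Lcg(u64);

impl Lcg {
    /// Return a pseudo-random value in [lo, hi).
    fn next_in(&mut self, lo: u64, hi: u64) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        lo + (self.0 >> 33) % (hi - lo)
    }
}

fn main() {
    let mut rng = Lcg(42);
    for round in 0..3 {
        let run_secs = rng.next_in(60, 300); // run for 1-5 minutes
        let stop_secs = rng.next_in(10, 240); // stay down for up to 4 minutes
        // In the real scenario these intervals would bracket conductor
        // shutdown/restart calls; here we only report the schedule.
        println!("round {round}: run {run_secs}s, stopped {stop_secs}s");
    }
}
```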

Started
Mon, 13 Apr 2026 13:35:51 UTC
Peer count
250
Peer count at end
62500
Behaviours
  • get_agent_activity_volatile (1 agent)
  • write (1 agent)
Holochain version
0.6.1-rc.4
Wind Tunnel version
0.6.0
Run ID
write_get_agent_activity_volatile_24337090660_1
Highest observed action_seq
The rate at which zero-arc readers observe new chain heads on the writers' chains via get_agent_activity. This reflects the DHT's ability to propagate agent activity ops and make them available to querying peers.
Mean
mean
182.19
/s
Max
Highest per-partition mean rate
max
1111.88
/s
Min
Lowest per-partition mean rate
min
0
/s
get_agent_activity_full zome call timing
The time taken to call the zome function that retrieves information on a write peer's source chain.
Mean
mean
0.279271
s
SD
0.773605
s
Max
Highest per-partition mean latency
max mean
1.977977
s
Min
Lowest per-partition mean latency
min mean
0.00515
s
Running volatile conductors count
The number of conductors being run by get_agent_activity_volatile peers.
p5
88
conductors
<
mean
151.56
conductors
SD
29.98
conductors
<
p95
172
conductors
Volatile conductor total running duration
The total running duration of a get_agent_activity_volatile conductor.
Mean
mean
991.490288
s
SD
1039.841353
s
Max
Highest per-partition mean latency
max mean
2026
s
Min
Lowest per-partition mean latency
min mean
25
s
Volatile conductor running duration
The duration that a get_agent_activity_volatile conductor was running before being stopped.
Mean
mean
140.582863
s
SD
93.7867
s
Max
Highest per-partition mean latency
max mean
900.130793
s
Min
Lowest per-partition mean latency
min mean
52.919396
s
Volatile conductor stopped duration
The duration that a get_agent_activity_volatile conductor was stopped before being started again.
Mean
mean
117.786246
s
SD
112.204675
s
Max
Highest per-partition mean latency
max mean
222.52899
s
Min
Lowest per-partition mean latency
min mean
12.140874
s
Reached target arc
Whether a get_agent_activity_volatile peer had reached its target arc at the moment before it was shut down.
  • get_agent_activity_volatile_agent: uhCAk-BEQ2G2seHYIP3WG4VxtaawSI9ClXU11TYY1f2yrwdpRf3cP
p5
0
<
mean
0.38
SD
0.48
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk-DSm9hH4O8kwAzOE__jL24sizpgay7WR3jN1Qg3Y6t0J1If8
p5
0
<
mean
0.15
SD
0.36
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk-Otz3OADd43ihJfea3nxQx78m18xiuPj8KECTVir_A23Pkd_
p5
0
<
mean
0.75
SD
0.43
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk-bUCkMqvy6Zk2eTQZEUdegbmZ9rpuWFQAPv3CJmknN3BrvFF
p5
0
<
mean
0.23
SD
0.42
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk-lZ6sW74J1TjZTr99edfVGGnD6A69RtOs2fOZaFLZn0CqhjD
p5
0
<
mean
0.09
SD
0.29
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk19T4sdyTu3ZMVlIh8rgvcedBk2Rax7RZGw6Rmmo7U1uWj6sg
p5
0
<
mean
0.38
SD
0.48
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk1AwNBZjNllr_YNlgviGWz9Gn4I8TXWPXmcLfUiKkbNfI63lA
p5
0
<
mean
0.33
SD
0.47
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk1gR97UrSJTMbmhJjQQNcnvgOE1ODDql6a3dXbFGt_f-hsM9Y
p5
0
<
mean
0.43
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk1iMR1iBRuswnBNK02T5vg-CAoe_vbgRg91PfBcJ5yvFtaTLR
p5
0
<
mean
0.33
SD
0.47
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk1kkV-f61nBLb7zsnUSux6jLtdE7-Ok3Ut_9Rr5C-CbfWPhg_
p5
0
<
mean
0.33
SD
0.47
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk1q7ihoPd47jTTetD4WQUKPpFqaIhtIq3QKc5hk0yGnvCDds1
p5
0
<
mean
0.08
SD
0.28
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk20fCKQwDxW9FAKhZLYsVNUok4K2yv0IgPlEdRCHLSTgOvfAV
p5
0
<
mean
0.2
SD
0.4
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk2AcPWs0W0G_9g_XhH67rTxN6jUPrS7m7QWN-mz2xa78u0zsx
p5
0
<
mean
0.44
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk2pNpJNHuibp-Ynf0t_imYNSO-ikeJa0mYJht8lFtkhhkn1UE
p5
0
<
mean
0.64
SD
0.48
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk3RT6yvJwq8-IkG7zZu5eHVugxPOlan9AQ8ug_F4vPuWyulmB
p5
1
mean
1
SD
0
p95
1
Not enough time data to show a trend.
  • get_agent_activity_volatile_agent: uhCAk3xnuMIsyJXW6Xs1tABAvoAyE97kWyMTaPTmATEcpojd-TWCR
p5
0
<
mean
0.62
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk49MZhENg5AxnFLPBW4Z3knSokyZ8-JO3REgCLUbznlVwGsnO
p5
0
<
mean
0.29
SD
0.46
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk4KghAqFMOdL5wn6u4SXCAJyzOmWhuePHLSiO1HrdGmgsW3ae
p5
0
<
mean
0.5
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk4NF-rgWLv7G0811V0cnC8LyzM4QFmzuyH4BRmlgIOCQyMayY
p5
0
<
mean
0.36
SD
0.48
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk4bkYA9xFL22zK5PkEdqUMf4UhukgzcPlWsuXsZzDoP_Cxw_Y
p5
0
<
mean
0.5
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk4hwd9caMVphghVoUlurEL7jzn-I6aD_IlOwDTtMy7NSaiGJN
p5
0
<
mean
0.46
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk4zS5gos-yZTSSBg09SxoljN_H19rI4TW93Cy_5nXoHhO25MR
p5
0
<
mean
0.3
SD
0.46
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk55mwFyrQAn1UPEvIruitlbhHV1Fmwsw5HGpD5U-X5KXCezGT
p5
0
<
mean
0.33
SD
0.47
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk5UoZCqyTSdiR_p7M7euHH8sBt_Kvid83SQn-fzbzy1T3j74Y
p5
0
<
mean
0.25
SD
0.43
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk5wjCnuk-BiMbAT-HGHgIzqXiw81QBE8y-3aMrUSI-K5ytN1M
p5
0
<
mean
0.33
SD
0.47
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk67Xn8nWKmn6jAJMf8su7DpjAFh4NUTjmdoC1PNGFmxvM58VF
p5
0
<
mean
0.6
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk6an98OyLvBudxepo7cotb7n10Tm2-DceJWRZZX73XPoQvnJ6
p5
0
<
mean
0.33
SD
0.47
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk6detNu9hEydAps_pu2NCUMez0yqJw3mgQCuoXIErYg4TQ4EQ
p5
0
<
mean
0.5
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk6lE8MAs-hCtTRdqgdXtqnobOW97Yar4WBPFZN7c2D0ftp1n7
p5
0
<
mean
0.5
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk7JdKYXfXSuZflTXz-d3HGnVsuhAAZLOT7u6bwd9Tqs9x9Kwl
p5
0
<
mean
0.6
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk8dDkmEgj1XVht1VhvcxEFwzeBT3koM1ngL0A7Ij3bk4Xe18X
p5
0
<
mean
0.15
SD
0.36
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk8mrms1VlUzJ1fVQCaK-Cv1uMpjISIH6hcSdPHsC0RPP9UiB3
p5
0
<
mean
0.21
SD
0.41
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk96kj3nSwRYyuy2UwsZgGtBfqPK7VFTjZHO4w8Ker-Ccuxxqu
p5
0
<
mean
0.5
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk9pcj8IYCNXWMQWkQajyrz1XBwkVs4HO0AavW8dmDRQg-Daoh
p5
0
<
mean
0.42
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkAMAJrD623remy9aY8DCfGG3LvIfiCGibE3IZ9WRbhLj6nJqK
p5
0
<
mean
0.56
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkAMIq9_moS256LiwRYe2hNEqatjOrZylx6nPQszLcvxscH0OG
p5
0
<
mean
0.27
SD
0.45
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkAPRUk3bzWATkkU6PB3IOCKRF6F5t7e5MVB68lHhIXH-VRGbF
p5
0
<
mean
0.62
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkAuRGNtXwYmmPJJma7SzCpIIgsQfBqQPgV1UWn-zX7WXloXPy
p5
0
<
mean
0.19
SD
0.39
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkBM8qWp9mxkq5Irs1R7B26YJnHAegavhnu0hsdTDB63ziT-8k
p5
0
<
mean
0.58
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkBU4aRt3rg_-xhxiLxP5JboThpATEsR9uCU4qIt3CkvBgHpqd
p5
0
<
mean
0.44
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkBfEeD9ymIEeLZuGbULpIFg1UKfAiNW4ZdtvwpnaoZksyxyKa
p5
0
<
mean
0.45
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkBvpkfUaRzxkW0l0wHNkCbl61JghQ04ZPapGwT4HmlcvmyZPR
p5
0
<
mean
0.83
SD
0.37
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkC3icimRsLcqtvKyMyT5AKcGiJJVl22PpMT9sv7JYKPlZtEoE
p5
0
<
mean
0.14
SD
0.35
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkCGNMtpY6gXFV7qYa0tqZnzjFNYFAhnZYYpXgAiPKGUT6TJyq
p5
0
<
mean
0.58
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkC_gDt9EmOS7SWKw0u9Wy2MysIxkt2F2gH2TJBovk4RuPCggG
p5
0
<
mean
0.38
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkCoqNfcXnFaNHIBndEbK8oyAblYAtuozieMH47HuzY5WZqDAU
p5
0
<
mean
0.36
SD
0.48
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkDSDoai9VJ_kcb5BLJt8c0BJndVGWlzr1vGsryD0fx20ZvPlW
p5
0
<
mean
0.29
SD
0.45
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkDa1yfnutXoitPgUeNCD9R9huRx3yRLygdQKMwrL20r2l4eHG
p5
0
<
mean
0.11
SD
0.31
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkDdre6BOkqgVaNC781jbfixWeBVuNUho-Rcve4CsZAqXBpZ4Z
p5
0
<
mean
0.22
SD
0.42
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkDtWf9UnHj-PNkL-paspwa8GzsZDZUXnI1W6O2hnnqQ-RE2Nw
p5
0
<
mean
0.42
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkDvJ8ELxRur-bE1RE7eT6168XwUfntvuOh5fLzPCY5omqlR5A
p5
0
<
mean
0.45
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkEBC5csGKPC2sIAKtSDXbeZC5QuDxwGTrLnDQQye6QWoYo87V
p5
0
<
mean
0.4
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkEu5-h31QLPaG7344DfY31oBGB39WbPgjO8wQr1cojjuY-EDB
p5
0
<
mean
0.44
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkF6dgQ7UuR6hOXgvnbq0PIZF0q1bb-NbMZZB5wBlj5bQA-CCh
p5
0
<
mean
0.44
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkFXS_JxS3WTGbxGCltc3T_k4Nfb9pDiIGGfXt75ZYX3ga9Mnx
p5
0
<
mean
0.33
SD
0.47
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkGQdJUSbwii9egafwNXU81wlN9IJDeELzBWdkqMwmYU8Riv9d
p5
0
<
mean
0.3
SD
0.46
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkIefyNIfLZV83zmecuOB6dAgvNdQ9YnNjvD8RTz6DEyE5zUJh
p5
0
<
mean
0.5
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkIoDExkVgidRkKO6xF_jo-xC99dpUSsA7zqwG-Ne1KR66pUEk
p5
0
<
mean
0.25
SD
0.43
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkJAOJH8Ql8ehgFXf0FWT7L3IuMciftgILx0vAMMSf9eyZkKRZ
p5
0
<
mean
0.11
SD
0.31
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkK2eTluXpDaOrmMJq1IqzSWfsB2WjRyrXRvRSQzFLAxU0ujiO
p5
0
<
mean
0.2
SD
0.4
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkLOKZRhdWEmE_pTvOJnFgXRtOrZIQ-nSIj6XgLm2qthGmMUG4
p5
0
<
mean
0.5
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkLZ7UhLCo21vIRhjN1WzzWSCkmVskzv6q6w2ryaJfhcrbRJ7M
p5
0
<
mean
0.62
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkLoPJWeih782xYCtIs_GJaN5o_uVbOJMjDEWIdto8FLIJEJuZ
p5
0
<
mean
0.25
SD
0.43
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkLvbYkBlXT-vnq7kp0EuHewadUfkqqeu6ds-nvt6bAEeFE3iG
p5
0
<
mean
0.6
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkM-QkKJ-3HWrbPs1q1p20q1YBzggnZ9fwYgheCv5TlstRSV_R
p5
0
<
mean
0.33
SD
0.47
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkNPgLROjs5VxRf9d-Qbsz3x25ZP9N4gpBfOYls-U8pq75Tkmr
p5
0
<
mean
0.58
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkNUJdW1qlA1dME1oNTXyvvbmfVk6fgZt8gOhU_ToEYqfm2-WD
p5
0
<
mean
0.44
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkO1W8H0BfwjCznlwdXKBzvEfZT1nogOALjoHQQ10gyodeHwTV
p5
0
<
mean
0.07
SD
0.26
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkORbjuohmXx64oUAuczzR8GUb-O7V5UIvJePOFw_96NspwVDn
p5
0
<
mean
0.09
SD
0.29
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkPvS1qbj1asSFkLlzkY4MAvsHCmT06OY3O8yM8lOFukPBE3UO
p5
0
<
mean
0.25
SD
0.43
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkQiFH70T0fFaLBjR5Bpr98u5IVmdWv646er9FS64fMPJGNcz0
p5
0
<
mean
0.55
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkRKkOJg_cgl7YbcYeOQb-QBHj6qz3UKrvUQ_s6yo1GXuww26S
p5
0
<
mean
0.33
SD
0.47
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkROwWrP_aGcaf9VX0p8nLCtYAHXzjpUi3r-NmH6Xo5aDo1ZxY
p5
0
<
mean
0.42
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkRVz7LYqINf29pXMFbLCls1xrXq08QWMs_6DpDgLLmZqRhIz3
p5
0
<
mean
0.2
SD
0.4
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkRjXQYmZH7IDz9Yj2XQBi4Xg7Zdnx2L3ua1610J3YkR9fDegY
p5
0
<
mean
0.19
SD
0.39
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkRt0EAdl7mXd-Dt7T6TyA8SKcUcsEpLaTEqjfvweZlb7HDE2a
p5
0
<
mean
0.33
SD
0.47
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkS3CO6-hZ5Hr7bd-BzP-7ksyl4Oz5LqO5YdpavkaFkr0mCzTo
p5
0
<
mean
0.25
SD
0.43
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkSHROASDAyXWMqLh1SohSzGSFPoXzdzs_ZsVyUECvFUidghK0
p5
0
<
mean
0.56
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkTF2SeJNLKe3zuWzLkMf7gf6seQMsEm0NQrQMNSX7Ljdja2fu
p5
0
<
mean
0.54
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkTM_oCO_LSVCAfp7N546dWIgs9NGUQVIXperodnQxEs0oJGAB
p5
0
<
mean
0.5
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkTRVxDzgpMV32SMUEF_B20PUv6vEcmpByjKHHu5bnLIY19wzU
p5
0
<
mean
0.5
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkTo7MebsK9Yz8UoKjs4E5djqLVv9N_abKqU8fw0wPhqcUMQxq
p5
0
<
mean
0.56
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkTrVQ1phY8n7rNwPDj_iBY-6qDCeb1WgIi07bvw_LZleFEqyf
p5
0
<
mean
0.56
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkTtECqaIM5bdjiXB8VS1oBBWwiboS4841bFmAXpnYA82UuKmN
p5
0
<
mean
0.36
SD
0.48
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkV5GbZgjMBOnhhnhCFU7qtSTmo-lg7aOeecDurOaiysvqdVlw
p5
0
<
mean
0.54
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkVBhsvB0R1RMYy85-hrXhdCbMkCieDCE4HdhjLfj-8PUrm8nb
p5
0
<
mean
0.22
SD
0.42
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkVSTXEVbLXJjxjvGw_GUotZIbC5scSyzPjPlupEWLj8QjVfuc
p5
0
<
mean
0.57
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkW3pc3yAvIkfj1O9W4NiMNDMA9W6N6zbfojuK6T1xp24-0V_L
p5
0
<
mean
0.56
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkW5gHELpB5LyPiT6AmufUpmn3TPFr5fYEKMTXCkpkulVHxa0G
p5
0
<
mean
0.38
SD
0.48
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkXEd6qvpjJ9_u7u22sMK7ENFcl9Lh6sZaKWcNlKU6Qxljm__9
p5
0
<
mean
0.33
SD
0.47
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkXxRT20sk93QMzSdodKeUJs_W9Vkn0L4EoWC59psIW4vSElsM
p5
0
<
mean
0.23
SD
0.42
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkYnYcPc86LW38xm_EHylK5bqP9nf1m8dxTYtQMHjjxbA8JMsV
p5
0
<
mean
0.23
SD
0.42
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkZAe-Ie7xb-tY9DwiIU_NPPmPz85IkC4_xuaSHuQ7y1IzTeQ_
p5
0
<
mean
0.4
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk_hEgQoInpB01m_u8TKZRhFabBaFA5oJVCuQMP3SJDLryDFHX
p5
0
mean
0
SD
0
p95
0
  • get_agent_activity_volatile_agent: uhCAkavM5UFSLLq4v_bduCLokRsK_2WZnOoHfA6jQItgWyEa8_Hd5
p5
0
<
mean
0.08
SD
0.28
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkbRLiOP4EQkYtakbrsgsf0kkhm0xAFIyUZOckFQ_UiG74VhEW
p5
0
<
mean
0.56
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkbbWMDjeM2H_1u0p7WWOirzwkYvZ65RjLEaah03A-aAMqPktx
p5
0
<
mean
0.56
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkbe_kIeUm59RH4qTnZjGCcqQg3MGqfnu5e2mGaHCDFdSCwlyk
p5
0
<
mean
0.55
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkblHauJAXw_OnRSi3YR1Qf-QC4gmY64zEaSba3-a18Xo6wRPU
p5
0
<
mean
0.07
SD
0.25
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkcPxRnSwnSElnmh0DLeq1HXAII45Y_sPdBxgvWj6yG9RMckgG
p5
0
<
mean
0.25
SD
0.43
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkcwSVTFEDUOsF1ZXpRD-8hnFVfzd6hAkfvvxRA3W7wN9k0VKT
p5
0
<
mean
0.67
SD
0.47
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkdFiWLqdMzvbUvbB0wN7dDeFU1JyIV4ITyfkjd4NMHI5Pjs5o
p5
0
<
mean
0.36
SD
0.48
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkdI5TAbJGO4R4X54auJYGtTT5ohAW3ktGdgL4TtA6Izs0DzTG
p5
0
<
mean
0.4
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkdbgY_xdeOiq6m4GQ4pNWwkpYw6XR_DTijFdivKunvpllruQo
p5
0
<
mean
0.57
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAke5ZJ4u-gC7JkCmNoXYEmPDRTDz5nOafxoa8KDYXXLGcDGe55
p5
0
<
mean
0.25
SD
0.43
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkeAJ7ZyIloIOuYQrTUMWo0ZpZw4-XGmhCHI7gb5kmu_unx4BG
p5
0
<
mean
0.67
SD
0.47
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkfN-WEbhrwX5KUn1YkuZ99QJEXyxlRR4d46VfmFScTrIdTw0O
p5
0
<
mean
0.33
SD
0.47
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkfZ63D0z7CiUObNWlZdUnxtPnQJhiNOgVl7K0l3CNZMN1aDwE
p5
0
<
mean
0.31
SD
0.46
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkfpZbcE-RpGeveti_qRnoo0KPhhESp3ff-NAmOpyDtmzsz8yG
p5
0
<
mean
0.57
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkg2kwaUlUyu3xyFccVxEieegemmNW2UT5L87XwM23yhqPz1Ul
p5
0
<
mean
0.45
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkgLE7vjWxbElgBHvF5TJSyPn0teEi7lQrYuvnPFj4Vhuru9pi
p5
0
<
mean
0.08
SD
0.27
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkgzNTc3-XYu350KDipR5_lUvR-QpMt0F91K1Jy-s81WYF9LQL
p5
0
<
mean
0.53
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkhF_bYyRXxJTlNCJd1ExS_htdfGvVArL-8eFV7zto7k5nvrJh
p5
0
<
mean
0.33
SD
0.47
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkhZOFJMzh4ITfGNIbPd5AMIxK_WUUcWJlwqE-rmTfDlJP3Xoc
p5
0
<
mean
0.25
SD
0.43
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkhjLxToI3GdJqnaZUpxTpG_S1EHYXXhDPNifWV1CyvWaUH6j4
p5
0
<
mean
0.22
SD
0.42
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkhw_FFLEVNWSKxbIF3WM70fWBOof2OCIXW-3HeGSG0wdii8EX
p5
0
<
mean
0.55
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkicLJ53OWoyrur3UXzxgnQCsdlGXpXiUuKUTa9MjOAVgXZg2d
p5
0
<
mean
0.4
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkilTdWX2urNr1djo8yY0RkSujc_92XOV33dstzrp9H11Dw2WV
p5
0
<
mean
0.13
SD
0.33
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkj2LcHuce4V8jpY6ISiF9Y8Yrextwp67PYR5SDMUcapzkgcew
p5
0
<
mean
0.33
SD
0.47
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkjAh1ipjsH3Vx146IZiIO05vSVPbLOoR_3p0T7x5QbbiKi3Lv
p5
0
<
mean
0.36
SD
0.48
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkjV96gA82RKtGXY2lbm6cQuMTUBvxtlahnJjDWewBahIkZodi
p5
0
<
mean
0.2
SD
0.4
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkkEeW0QSPz3IIGJJoIyqEuOjCnTpJG66loC5k9nyJGVJf3MPh
p5
0
<
mean
0.4
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkkHuV5NEDY2slUf5VSo8ef5ScUJf11icH5bow_vUT4p1lxwun
p5
0
mean
0
SD
0
p95
0
  • get_agent_activity_volatile_agent: uhCAkkreeWkqOvk8I-sj0JJg0VMr9rtjymu1rSJJBavxz2xwGiIhb
p5
0
<
mean
0.36
SD
0.48
<
p95
1
  • get_agent_activity_volatile_agent: uhCAklBv_KWjnyS_YIxpPANHUPCUOlQcNB_NDviBgVW_F0DiwORmT
p5
0
<
mean
0.54
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkldZvrs7EPrjBPQ1rKRPYPSwWgF3eETin0eK_p2Sza3pmTaIZ
p5
0
<
mean
0.27
SD
0.44
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkldwhDWs5-1stc7WQwNploPyJt0ifBPtJnk5UNJua2nJaQ7rM
p5
0
<
mean
0.27
SD
0.45
<
p95
1
  • get_agent_activity_volatile_agent: uhCAklgfb-AP_1JH4wAJqlpzEDkfH1uVfsVaBoqW8ketx4dpjFnA6
p5
0
<
mean
0.46
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAklm4lbDAo310tQIyn_beT3GCcCtBjHaa9TlgkVJYcdUtiYMd3
p5
0
<
mean
0.13
SD
0.34
<
p95
1
  • get_agent_activity_volatile_agent: uhCAklzKSDISvcppiJTSfwrvfGrNE9-B0bX4-H7au6KdLg3MVTV13
p5
0
<
mean
0.33
SD
0.47
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkm43vjWZ--ii9Mu0TiDGf4JqKavtlwW7lokIsGlPMe1xQoGO_
p5
0
<
mean
0.5
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkmVLGNo50jLiJaj72f4C9WzKcsKuzHs1HTYfb4muvNXA-Mgco
p5
0
<
mean
0.55
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkmxS7Yfx2WnI2pUNrqjvNuUmEvwCsC0OJSR9RIGOVBi1axBck
p5
0
<
mean
0.11
SD
0.31
<
p95
1
  • get_agent_activity_volatile_agent: uhCAknbtfq2yv3xZbyKpcVmXw71LT6QDbwA8IdLGom656gQTm9H6x
p5
0
<
mean
0.69
SD
0.46
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkoHr4Z07EbuZdWFGwQ-GXxq5a9jZLyUQ-YolnLWM-Zo5kgdzN
p5
0
<
mean
0.09
SD
0.29
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkoUhdV2peSwisfikDl06-wXqD2jIm_MMNtA9ZEYHYjFQe9Gop
p5
0
<
mean
0.5
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkp8VMwC5G5XT3xvV_iqKfoth8Z24UzvYJ7_VuiPi9T6fOtVwB
p5
0
<
mean
0.55
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkpGOYPpXzRaaCkMuS2DQw3IKnPoFrsnNrxQLWzGPaDHnhsQ2p
p5
0
mean
0
SD
0
p95
0
  • get_agent_activity_volatile_agent: uhCAkpHG85N8QXoHKjP9S2rDoPP4LbWJGJ7SbCi3puL-4YMCiEg_C
p5
0
<
mean
0.47
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkpnQ5QL2K-ITf-QIQd9iqekV8NfPSrEHH3--1G2UPvrZHghUo
p5
0
<
mean
0.46
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkpxCP_DV5gNm-_12WHRV1AE3dBAFhBQBzfmksebYcmkNgDMBR
p5
0
<
mean
0.42
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkq1PExUnUrPJw9-yn1A7yzMxQXBYTfAJ4qo8eNY2EBuh0Kv2S
p5
0
<
mean
0.6
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkqJSDBXloDLSpqMNkj6KX9HnxOuBfnrczw3YrCBDrGnLQaAaI
p5
0
<
mean
0.4
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkqkp1F6VpBWvfmOzqJ_uEIgd8BCeudXmcJleZLTT_zLbgdk4h
p5
0
<
mean
0.22
SD
0.42
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkqqB4uGY5Q6Zs4dd1JDl6dAPt7EiooY_j3kqG48xzZ76jGIG0
p5
0
<
mean
0.25
SD
0.43
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkr7sETA16TWYQpn_eaRjZIybb2835dKHvVgj-p3VIZwuv_TaR
p5
0
<
mean
0.2
SD
0.4
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkrktVwKBopL9gU-50tj_eh7BfYaqTXEaRquDHbmqsphrbTk2F
p5
0
<
mean
0.6
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAks319LvZ5WMt9lIffkTMSP6v_9A2Y4XPvT8Mv4zpBwpKM5L-o
p5
0
<
mean
0.67
SD
0.47
<
p95
1
  • get_agent_activity_volatile_agent: uhCAksq0JwBM3BUxeP3HPSBmD34se7CBfWePNn-BqNGnwPR9BCKRw
p5
0
<
mean
0.55
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAksy1s5lgjv-6Q16YRjD7DJv7PT8Dm2sy2YFp-vADcbV9uyANl
p5
0
<
mean
0.21
SD
0.41
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkt-4Tr31ulogz9PD8zmgY67negZX61HjmAp9Aj_dPTngNRJy5
p5
0
<
mean
0.25
SD
0.43
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkt3h93BT5GwgccqCNWcUK4y6JR13lP7jJQS_O-y7ydcvkYMOu
p5
0
<
mean
0.22
SD
0.42
<
p95
1
  • get_agent_activity_volatile_agent: uhCAktBQS1koYJWRxxZA11JeTk6gQ7daXh7sFvPEcZoPHUR7HfiXr
p5
0
<
mean
0.36
SD
0.48
<
p95
1
  • get_agent_activity_volatile_agent: uhCAktFA7fGLwo15fNUP1nwMU5Xejy2ZZribj65eH1nUy5oB_i6vG
p5
0
<
mean
0.2
SD
0.4
<
p95
1
  • get_agent_activity_volatile_agent: uhCAktVR0CfwreljXUjGz18Zs-M1yuCLA5LZ_sGGxidjiDZW9euWL
p5
0
<
mean
0.21
SD
0.41
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkthrVo-0fz83aIMQHjoVBpvY1a6fixcFS1QX6voJMblyU_8L_
p5
0
<
mean
0.6
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAku265VJpLmd8poxtrMwZqqFYc4WuHLK4qFzBLn7zi4i-Hh2Lb
p5
0
<
mean
0.3
SD
0.46
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkuN2qVmVRCCvHorIj93yvibsEF8XboHantl9_3dk5DCRAobpR
p5
0
<
mean
0.58
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkv7Lq6OBv09WXgiYXjm2WE7EU_lBBXCFVb4OT1fOdr2YyNqm5
p5
0
<
mean
0.22
SD
0.42
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkvAaYJuotdOVA1RIVIYrWgTvfIBq53dX1rQ4HUoqjJUOSjadv
p5
0
<
mean
0.11
SD
0.31
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkvLOqwiew382oRvtpQkTKBAD7cXNUBtYMaGS6SvZBakP3uzuO
p5
0
<
mean
0.44
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkvkJHL2rF6yrLXl1Fnlzp2NJ3XhX18kBb89veG1tYwJxo_col
p5
0
<
mean
0.27
SD
0.45
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkw8XwqsypHA_c-C0QXpyJF1QUNELaW7LFw3AOkEipSgyWVqyj
p5
0
<
mean
0.57
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkwDHRet96czDz7Pp43Cfgnek0pA8R3-IUjnBFyImL2m2pTds3
p5
0
<
mean
0.42
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkwSadAW-iQluV2hO9ppeabsRU8UcpTY4O9Xva_xXIo6uHFwFh
p5
0
<
mean
0.07
SD
0.26
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkwlzJml6Z0hxZErKFuN1A5Rh2G2d1pZzOPDgGwOAhmPqT80wT
p5
0
<
mean
0.11
SD
0.31
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkwsvuiCRAOKrhp2p-0GDtiUIEK_2BN8pljJHmS6WKdrTNMw2f
p5
0
<
mean
0.4
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkx3WcbbnE_seDanU268PaC2dtHZDBpMe0bcQ-6rM9n2d75q5Z
p5
0
<
mean
0.71
SD
0.45
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkxGtV4dKNgwPZTIEjPk3Cwyw8SF77HuLwL-pEZznanaFYSgFI
p5
0
<
mean
0.43
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkxtdB8TZDh9bmcoviN06FQdWNkyozNVp9rltFpdK4cATiECaB
p5
0
<
mean
0.5
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkxvrQZGB_O-fG04a_ZwtskcP3swGQpM8mAVRbJIywc8YK8son
p5
0
<
mean
0.38
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAky3u3ZxLOzeci0rKpGnHSAtWlwDaJxEy6U6Bbm8qtyFUikBi6
p5
0
<
mean
0.33
SD
0.47
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkyA3pGJKcJO4Q5afcyPJup-SY9wPDMHbnpyuSoMWgYvM7ldyW
p5
0
<
mean
0.33
SD
0.47
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkyIbexKppL3rUNWTPG5ISd-LPS8k7yjHaD0RX870uRTWlIfqV
p5
0
<
mean
0.44
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkzLIwHwro8mI-Zw6pHx6xPOd1sSYahRmk_DddhW6qt46C35wr
p5
0
<
mean
0.42
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkzmMOr7zHXNQCsY6fHr6Tryfdl_F6VTrab5LdpjYL3di2QK7z
p5
0
<
mean
0.21
SD
0.41
<
p95
1
Error count
The number of errors accumulated during the scenario.
1794
Holochain Metrics
Cascade duration
Time taken to execute a cascade (get) query inside Holochain.
mean
0.001347
s
SD
0.00516
s
p50
0.001507
s
<
p95
0.011491
s
<
p99
0.030062
s
Cascade fetch errors
Count of network fetch errors during cascade calls.
total
52
over 2569.085818499s
mean rate
1455.46
/s
std
23769.6
/s
p5
0
/s
p95
3401.07
/s
peak
2.59067358e+06
/s
Zome call duration: 'agent_activity::announce_write_behaviour'
Duration of this zome call as measured by Holochain internally.
mean
0.00803
s
SD
0.003533
s
p50
0.008019
s
<
p95
0.012289
s
<
p99
0.028727
s
Zome call duration: 'agent_activity::create_sample_entry'
Duration of this zome call as measured by Holochain internally.
mean
0.007683
s
SD
0.001478
s
p50
0.008027
s
<
p95
0.009742
s
<
p99
0.011267
s
Zome call duration: 'agent_activity::get_agent_activity_full'
Duration of this zome call as measured by Holochain internally.
mean
0.059966
s
SD
5.058855
s
p50
0.665067
s
<
p95
4.645507
s
<
p99
32.553195
s
Zome call duration: 'agent_activity::get_random_agent_with_write_behaviour'
Duration of this zome call as measured by Holochain internally.
mean
0.009225
s
SD
1.294657
s
p50
0.552317
s
<
p95
3.857443
s
<
p99
6.458669
s
WASM call duration: 'agent_activity::announce_write_behaviour'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.006668
s
SD
0.003249
s
p50
0.006534
s
<
p95
0.008817
s
<
p99
0.02694
s
WASM call duration: 'agent_activity::create_sample_entry'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.006046
s
SD
0.001189
s
p50
0.006378
s
<
p95
0.007613
s
<
p99
0.008812
s
WASM call duration: 'agent_activity::get_agent_activity_full'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.405801
s
SD
58.86007
s
p50
1.485697
s
<
p95
89.89911
s
<
p99
359.89298
s
WASM call duration: 'agent_activity::get_random_agent_with_write_behaviour'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.015236
s
SD
6.633063
s
p50
0.566846
s
<
p95
7.07423
s
<
p99
30.013405
s
WASM call duration: 'agent_activity_integrity::entry_defs'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.001867
s
SD
0.000468
s
p50
0.002003
s
<
p95
0.002321
s
<
p99
0.003014
s
Host function call duration: '__hc__agent_info_1'
Duration of this host function call invoked from within WASM.
mean
6.3e-05
s
SD
6.6e-05
s
p50
5.1e-05
s
<
p95
0.000145
s
<
p99
0.000562
s
Host function call duration: '__hc__create_1'
Duration of this host function call invoked from within WASM.
mean
0.000667
s
SD
0.000157
s
p50
0.000674
s
<
p95
0.000898
s
<
p99
0.00107
s
Host function call duration: '__hc__create_link_1'
Duration of this host function call invoked from within WASM.
mean
0.000732
s
SD
0.000723
s
p50
0.000617
s
<
p95
0.00115
s
<
p99
0.006347
s
Host function call duration: '__hc__get_agent_activity_1'
Duration of this host function call invoked from within WASM.
mean
0.404009
s
SD
58.86005
s
p50
1.483832
s
<
p95
89.897287
s
<
p99
359.890455
s
Host function call duration: '__hc__get_links_1'
Duration of this host function call invoked from within WASM.
mean
0.011472
s
SD
6.63306
s
p50
0.561719
s
<
p95
7.070485
s
<
p99
30.006989
s
Host function call duration: '__hc__random_bytes_1'
Duration of this host function call invoked from within WASM.
mean
1.6e-05
s
SD
1.1e-05
s
p50
2.5e-05
s
<
p95
4.6e-05
s
<
p99
6.2e-05
s
Host function call duration: '__hc__zome_info_1'
Duration of this host function call invoked from within WASM.
mean
0.003164
s
SD
0.000869
s
p50
0.003118
s
<
p95
0.004201
s
<
p99
0.00578
s
Post-commit duration
Time spent executing post-commit workflows.
mean
0.000577
s
SD
0.000184
s
p50
0.000608
s
<
p95
0.00079
s
<
p99
0.001159
s
Conductor uptime
Conductor uptime gauge. A drop in the trend indicates a restart during the run.
p5
8.17
s
<
mean
363.96
s
SD
481.82
s
<
p95
1545.43
s
Integrated ops
Count of DHT ops integrated since the conductor started. Resets on restart.
total
25740
over 2595.227371931s
mean rate
1.92391529e+06
/s
std
4.354641387e+07
/s
p5
0
/s
p95
3.84175497e+06
/s
peak
4.78880675818e+09
/s
Integration delay
Delay between an op being stored and being integrated. High values indicate the pipeline is falling behind.
mean
96.564597
s
SD
185.703069
s
p50
28.33668
s
<
p95
550.102422
s
<
p99
859.154948
s
Validation attempts per op
Number of validation attempts required per op. Values consistently above 1 indicate retries.
mean
1.457531
SD
2.668757
p50
1.524484
<
p95
5.8
<
p99
16
App validation workflow duration
Time spent running the app validation workflow.
mean
4.892361
s
SD
8.481744
s
p50
0.077118
s
<
p95
21.232997
s
<
p99
40.686785
s
Countersigning workflow duration
Time spent running the countersigning workflow.
mean
0.005767
s
SD
0.005726
s
p50
0.004637
s
<
p95
0.013721
s
<
p99
0.021621
s
Integrate DHT ops workflow duration
Time spent running the integration workflow.
mean
0.019164
s
SD
0.027002
s
p50
0.006225
s
<
p95
0.040556
s
<
p99
0.074592
s
Publish DHT ops workflow duration
Time spent running the publish workflow.
mean
1.158308
s
SD
1.403328
s
p50
0.011435
s
<
p95
3.616322
s
<
p99
7.009642
s
System validation workflow duration
Time spent running the sys validation workflow.
mean
19.125214
s
SD
43.209069
s
p50
12.623698
s
<
p95
122.191908
s
<
p99
214.47376
s
Validation receipt workflow duration
Time spent running the validation receipt workflow.
mean
0.842816
s
SD
32.868953
s
p50
0.731066
s
<
p95
63.775864
s
<
p99
201.859633
s
Authored DB connection use time
Time spent holding authored database connections.
mean
0.003375
s
SD
0.001601
s
p50
0.000804
s
<
p95
0.004807
s
<
p99
0.007312
s
DHT DB connection use time
Time spent holding DHT database connections.
mean
0.003455
s
SD
0.001684
s
p50
0.00089
s
<
p95
0.005292
s
<
p99
0.008107
s
Conductor DB connection use time
Time spent holding conductor database connections.
mean
0.000243
s
SD
0.000173
s
p50
0.000165
s
<
p95
0.000401
s
<
p99
0.000824
s
Cache DB connection use time
Time spent holding cache database connections.
mean
0.000179
s
SD
0.00026
s
p50
0.000181
s
<
p95
0.000495
s
<
p99
0.001449
s
WASM DB connection use time
Time spent holding WASM database connections.
mean
0.023648
s
SD
0.019732
s
p50
0.026818
s
<
p95
0.053858
s
<
p99
0.068805
s
Peer meta store DB connection use time
Time spent holding peer meta store database connections.
mean
0.000138
s
SD
0.000219
s
p50
0.000244
s
<
p95
0.000733
s
<
p99
0.001014
s
Write transaction duration
Duration of exclusive write transactions across all databases.
mean
0.004315
s
SD
0.066505
s
p50
0.002072
s
<
p95
0.198652
s
<
p99
0.320835
s
Lair keystore: signing
Duration of Ed25519 signing requests to the Lair keystore.
mean
0.000315
s
SD
0.000481
s
p50
0.00052
s
<
p95
0.001366
s
<
p99
0.002459
s
P2P request roundtrip: 'get'
Time spent sending a get request and awaiting its response.
mean
1.801555
s
SD
28.524086
s
p50
1.038788
s
<
p95
60.002562
s
<
p99
60.004348
s
P2P request roundtrip: 'get_links'
Time spent sending a get_links request and awaiting its response.
mean
1.176593
s
SD
4.679464
s
p50
0.961507
s
<
p95
3.57142
s
<
p99
7.054527
s
P2P request roundtrip: 'get_agent_activity'
Time spent sending a get_agent_activity request and awaiting its response.
mean
1.100995
s
SD
27.911996
s
p50
0.984656
s
<
p95
60.001613
s
<
p99
149.742272
s
P2P request roundtrip: 'send_validation_receipts'
Time spent sending a send_validation_receipts request and awaiting its response.
mean
0.899773
s
SD
29.602262
s
p50
0.705763
s
<
p95
60.001685
s
<
p99
60.004757
s

Zero-Arc Create Data

A mixed zero-arc/full-arc scenario in which zero-arc nodes create data and full-arc nodes read it. The scenario has two roles:
  • zero: A zero-arc conductor that creates entries with a timestamp field. Those entries are linked to a known base hash so that full-arc nodes can retrieve them.
  • full: A full-arc conductor that reads the entries created by the zero-arc node(s) and records the time lag between when each entry was created and when it was first discovered.
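The lag recorded by the full role reduces to "time of first sighting minus the timestamp carried in the entry". A minimal sketch of that bookkeeping (illustrative only; the entry shape and the `seen` set are assumptions, not Wind Tunnel's actual code):

```python
import time

def record_new_lags(entries, seen):
    """For each newly discovered entry, record now - created_at as the sync lag.

    `entries` is an iterable of (entry_hash, created_at) pairs, where
    created_at is the Unix timestamp the writer embedded in the entry.
    `seen` is the set of hashes already discovered; only the first
    sighting of an entry contributes a lag sample.
    """
    lags = []
    now = time.time()
    for entry_hash, created_at in entries:
        if entry_hash in seen:
            continue  # already discovered in an earlier poll
        seen.add(entry_hash)
        lags.append(now - created_at)  # seconds from creation to first discovery
    return lags
```

Polling with the same `seen` set across behaviour loops ensures each entry is counted once, which matches the "first seen" semantics of the sync-lag metric.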
Started
Mon, 13 Apr 2026 15:36:12 UTC
Peer count
250
Peer count at end
62250
Behaviours
  • full (1 agent)
  • zero (1 agent)
Holochain version
0.6.1-rc.4
Wind Tunnel version
0.6.0
Run ID
zero_arc_create_data--high_zero_24337090660_1
Create rate
The number of timed entries created by the zero-arc node(s) per second.
Mean
mean
67.16
/s
Max
Highest per-partition mean rate
max
209
/s
Min
Lowest per-partition mean rate
min
19.77
/s
Sync lag timing
For each entry, the time lag between when it was created and when the full-arc node could read it via the get_timed_entries_local zome function.
Mean
mean
508.132091
s
SD
69.695913
s
Max
Highest per-partition mean latency
max mean
884.824374
s
Min
Lowest per-partition mean latency
min mean
125.912352
s
Sync lag rate
The number of entries per second received by full nodes.
Mean
mean
50.01
/s
Max
Highest per-partition mean rate
max
387
/s
Min
Lowest per-partition mean rate
min
0
/s
Open connections
The number of currently open connections to other conductors.
full-arc
p5
18
<
mean
55.05
SD
37.27
<
p95
142
zero-arc
p5
6
<
mean
41.09
SD
36.08
<
p95
144
Error count
The number of errors accumulated across all nodes.
21
Holochain Metrics
Cascade duration
Time taken to execute a cascade (get) query inside Holochain.
mean
0.011412
s
SD
0.009847
s
p50
0.011314
s
<
p95
0.03343
s
<
p99
0.044475
s
Cascade fetch errors
Count of network fetch errors during cascade calls.
total
3
over 740.016326652s
mean rate
5.41
/s
std
7.15
/s
p5
0
/s
p95
15.36
/s
peak
15.37
/s
Zome call duration: 'timed::created_timed_entry'
Duration of this zome call as measured by Holochain internally.
mean
0.010386
s
SD
0.00353
s
p50
0.011775
s
<
p95
0.017064
s
<
p99
0.019638
s
Zome call duration: 'timed::get_timed_entries_local'
Duration of this zome call as measured by Holochain internally.
mean
0.038918
s
SD
0.096278
s
p50
0.046213
s
<
p95
0.254906
s
<
p99
0.489965
s
WASM call duration: 'timed::created_timed_entry'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.008934
s
SD
0.003014
s
p50
0.010192
s
<
p95
0.014726
s
<
p99
0.016876
s
WASM call duration: 'timed::get_timed_entries_local'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.037383
s
SD
0.09573
s
p50
0.04355
s
<
p95
0.252995
s
<
p99
0.486655
s
WASM call duration: 'timed_integrity::entry_defs'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.001578
s
SD
0.000576
s
p50
0.00181
s
<
p95
0.002692
s
<
p99
0.003559
s
Host function call duration: '__hc__create_1'
Duration of this host function call invoked from within WASM.
mean
0.000645
s
SD
0.000242
s
p50
0.000724
s
<
p95
0.001055
s
<
p99
0.001283
s
Host function call duration: '__hc__create_link_1'
Duration of this host function call invoked from within WASM.
mean
0.000641
s
SD
0.000257
s
p50
0.000718
s
<
p95
0.001102
s
<
p99
0.001388
s
Host function call duration: '__hc__get_1'
Duration of this host function call invoked from within WASM.
mean
0.013725
s
SD
0.014543
s
p50
0.014048
s
<
p95
0.042718
s
<
p99
0.075292
s
Host function call duration: '__hc__get_links_1'
Duration of this host function call invoked from within WASM.
mean
0.007773
s
SD
0.008469
s
p50
0.011357
s
<
p95
0.024412
s
<
p99
0.037921
s
Host function call duration: '__hc__zome_info_1'
Duration of this host function call invoked from within WASM.
mean
0.002844
s
SD
0.001304
s
p50
0.003236
s
<
p95
0.005087
s
<
p99
0.008605
s
Post-commit duration
Time spent executing post-commit workflows.
mean
0.00074
s
SD
0.000388
s
p50
0.00085
s
<
p95
0.001279
s
<
p99
0.0016
s
Conductor uptime
Conductor uptime gauge. A drop in the trend indicates a restart during the run.
p5
45.98
s
<
mean
455.69
s
SD
262.79
s
<
p95
865.76
s
Integration delay
Delay between an op being stored and being integrated. High values indicate the pipeline is falling behind.
mean
21.024495
s
SD
24.948184
s
p50
20.72973
s
<
p95
46.413354
s
<
p99
147.881876
s
Validation attempts per op
Number of validation attempts required per op. Values consistently above 1 indicate retries.
mean
1.999663
SD
0.2549
p50
1.999576
<
p95
1.999862
<
p99
2.001514
App validation workflow duration
Time spent running the app validation workflow.
mean
0.091125
s
SD
7.494437
s
p50
0.108181
s
<
p95
3.167305
s
<
p99
45.153248
s
Countersigning workflow duration
Time spent running the countersigning workflow.
mean
0.004909
s
SD
0.00484
s
p50
0.003874
s
<
p95
0.014022
s
<
p99
0.025382
s
Integrate DHT ops workflow duration
Time spent running the integration workflow.
mean
0.022547
s
SD
0.019809
s
p50
0.027535
s
<
p95
0.062374
s
<
p99
0.086405
s
Publish DHT ops workflow duration
Time spent running the publish workflow.
mean
0.92383
s
SD
5.149046
s
p50
0.926462
s
<
p95
15.43936
s
<
p99
25.855285
s
System validation workflow duration
Time spent running the sys validation workflow.
mean
0.060518
s
SD
3.973842
s
p50
0.068066
s
<
p95
0.144104
s
<
p99
3.388101
s
Validation receipt workflow duration
Time spent running the validation receipt workflow.
mean
0.135184
s
SD
8.039543
s
p50
0.144328
s
<
p95
1.105994
s
<
p99
42.68305
s
Authored DB connection use time
Time spent holding authored database connections.
mean
0.009715
s
SD
0.005659
s
p50
0.008864
s
<
p95
0.017993
s
<
p99
0.020225
s
DHT DB connection use time
Time spent holding DHT database connections.
mean
0.003046
s
SD
0.002085
s
p50
0.003514
s
<
p95
0.006409
s
<
p99
0.00692
s
Conductor DB connection use time
Time spent holding conductor database connections.
mean
0.000307
s
SD
0.00012
s
p50
0.000354
s
<
p95
0.000515
s
<
p99
0.000591
s
Cache DB connection use time
Time spent holding cache database connections.
mean
0.001273
s
SD
0.003301
s
p50
0.000266
s
<
p95
0.002484
s
<
p99
0.017788
s
WASM DB connection use time
Time spent holding WASM database connections.
mean
0.000306
s
SD
0.000419
s
p50
0.000246
s
<
p95
0.000456
s
<
p99
0.003026
s
Peer meta store DB connection use time
Time spent holding peer meta store database connections.
mean
0.000185
s
SD
9.3e-05
s
p50
0.000227
s
<
p95
0.00034
s
<
p99
0.00047
s
Write transaction duration
Duration of exclusive write transactions across all databases.
mean
0.004081
s
SD
0.107424
s
p50
0.003612
s
<
p95
0.299289
s
<
p99
0.441532
s
Lair keystore: signing
Duration of Ed25519 signing requests to the Lair keystore.
mean
0.000781
s
SD
0.007086
s
p50
0.000588
s
<
p95
0.004663
s
<
p99
0.035459
s
P2P request roundtrip: 'get'
Time spent sending a get request and awaiting its response.
mean
3.553085
s
SD
28.169898
s
p50
1.599712
s
<
p95
60.138407
s
<
p99
65.035619
s
P2P request roundtrip: 'send_validation_receipts'
Time spent sending a send_validation_receipts request and awaiting its response.
mean
0.381415
s
SD
28.136391
s
p50
0.819384
s
<
p95
60.080877
s
<
p99
60.191197
s
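The duration metrics above are all summarised the same way: mean, standard deviation, and p50/p95/p99 percentiles. A minimal sketch of how such a summary can be computed from raw duration samples (this helper is illustrative only, not Wind Tunnel's actual implementation):

```python
import math
import statistics

def summarize(samples):
    """Summarise raw duration samples the way this report does:
    mean, standard deviation, and p50/p95/p99 percentiles."""
    ordered = sorted(samples)

    def pct(p):
        # nearest-rank percentile over the sorted samples
        idx = max(0, math.ceil(p / 100 * len(ordered)) - 1)
        return ordered[idx]

    return {
        "mean": statistics.mean(ordered),
        "sd": statistics.stdev(ordered),
        "p50": pct(50),
        "p95": pct(95),
        "p99": pct(99),
    }
```

Nearest-rank is used here because it always returns an observed sample; interpolating variants (e.g. `statistics.quantiles`) give slightly different values near the tails.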
Host Metrics
User CPU usage
CPU usage in user space
36.87
%
Network receive rate (primary)
Rate of bytes received on primary network interface
p5
1.60
KiB/s
<
mean
422.64
KiB/s
SD
391.17
KiB/s
<
p95
1.13
MiB/s
Network send rate (primary)
Rate of bytes sent on primary network interface
p5
136
B/s
<
mean
68.20
KiB/s
SD
155.89
KiB/s
<
p95
259.20
KiB/s
Total bytes received
Total bytes received on primary network interface
count
111.48
GiB
mean
3.24
MiB/s
Total bytes sent
Total bytes sent on primary network interface
count
17.98
GiB
mean
534.44
KiB/s
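The mean rates shown alongside the byte totals above are simply the cumulative counter divided by the run duration. A small illustrative helper (the numbers in the example are hypothetical, not taken from this run):

```python
GIB = 1024 ** 3
MIB = 1024 ** 2

def mean_rate_mib_s(total_bytes, duration_s):
    """Mean throughput in MiB/s from a cumulative byte counter
    and the elapsed run duration in seconds."""
    return total_bytes / MIB / duration_s

# e.g. 1 GiB transferred over 512 seconds -> 2.0 MiB/s
print(mean_rate_mib_s(1 * GIB, 512))
```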
CPU spike anomaly
Whether a CPU spike anomaly was detected
Detected

Warning: CPU p99 reached 91.8%

Memory leak anomaly
Whether a memory leak anomaly was detected
Detected

Warning: Memory growing at 389.72 MiB/s

Disk full anomaly
Whether a disk full anomaly was detected
NotDetected
Swap thrashing anomaly
Whether a swap thrashing anomaly was detected
NotDetected
System overload anomaly
Whether a system overload anomaly was detected
Detected

Warning: 9% of hosts overloaded (load5/ncpus > 1.0)

Additional Host Metrics
Hidden by default. Click to toggle visibility.
CPU usage
Total CPU usage and kernel CPU usage
Total
p5
0.96
%
<
mean
50.88
%
SD
29.58
%
<
p95
86
%
System
14.01
%
CPU percentiles
CPU usage percentiles
p50
59.46
%
p95
86.00
%
p99
91.83
%
CPU usage above 80%
Number of hosts above 80% CPU and mean time spent above threshold for those hosts
count
208
hosts
mean time
0.21
s
Memory used percentage
Percentage of memory used
p5
7.13
%
<
mean
11.14
%
SD
4.42
%
<
p95
15.68
%
Memory available percentage
Percentage of available memory
p5
84.32
%
<
mean
88.86
%
SD
4.42
%
<
p95
92.87
%
Max host memory used
Maximum memory usage percentage across all hosts
max
77.27
%
Max host swap used percentage
Maximum swap space usage percentage across all hosts
max
0.14
%
Memory growth rate
Rate of memory growth over time
growth
389.72
MiB/s
Disk read throughput
Disk read throughput in MiB/s
0.13
MiB/s
Disk write throughput
Disk write throughput in MiB/s
309.32
MiB/s
Disk space utilization risk
Number of hosts nearing disk space capacity by mount point
Mount Point /
0/228
hosts
Mount Point /efi-boot
0/5
hosts
Mount Point /etc/hostname
0/10
hosts
Mount Point /etc/hosts
0/10
hosts
Mount Point /etc/resolv.conf
0/10
hosts
Mount Point /nix/store
0/8
hosts
System load average
System load averages over 1, 5, and 15 minutes. These are unnormalised values (not divided by the number of CPUs), so they are only comparable when all machines have the same core count.
1 min
1.96
5 min
1.50
15 min
1.06
CPU overloaded hosts
Percentage of hosts that experienced CPU overload
8.76
%
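The overload criterion quoted in the warning above (load5/ncpus > 1.0) can be sketched as follows; this is a hypothetical helper, not the scenario's actual detection code:

```python
def overloaded_fraction(hosts):
    """hosts: list of (load5, ncpus) pairs, one per host.
    A host counts as overloaded when its 5-minute load average
    exceeds its CPU count, i.e. load5 / ncpus > 1.0."""
    over = sum(1 for load5, ncpus in hosts if load5 / ncpus > 1.0)
    return over / len(hosts)

# Two of these four hypothetical hosts exceed the threshold -> 0.5
print(overloaded_fraction([(2.0, 1), (0.5, 1), (1.5, 2), (3.0, 2)]))
```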
CPU pressure
CPU pressure over 10, 60, and 300 second averages
10 second average
p5
0
%
<
mean
23.6099
%
SD
18.2469
%
<
p95
45.68
%
60 second average
22.6137
%
300 second average
17.4334
%
Memory pressure some
Memory pressure (some tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
mean
0
%
SD
0
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
Memory pressure full
Memory pressure (all tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
mean
0
%
SD
0
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
I/O pressure some
I/O pressure (some tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
<
mean
4.3815
%
SD
6.7727
%
<
p95
11.74
%
60 second average
4.2262
%
300 second average
3.3019
%
I/O pressure full
I/O pressure (all tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
<
mean
2.8811
%
SD
6.1645
%
<
p95
8.96
%
60 second average
2.7762
%
300 second average
2.1786
%
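The CPU, memory, and I/O pressure figures above use the Linux PSI (pressure stall information) format exposed under `/proc/pressure`, where each line reads e.g. `some avg10=4.38 avg60=4.22 avg300=3.30 total=123456`. A sketch of parsing that format (assuming a Linux host; not the collector this report actually uses):

```python
def parse_psi(text):
    """Parse Linux PSI output such as
    'some avg10=4.38 avg60=4.22 avg300=3.30 total=1000'
    into {'some': {'avg10': 4.38, ...}, 'full': {...}}."""
    result = {}
    for line in text.strip().splitlines():
        kind, *fields = line.split()
        result[kind] = {
            key: float(value)
            for key, value in (field.split("=") for field in fields)
        }
    return result

sample = (
    "some avg10=4.38 avg60=4.22 avg300=3.30 total=1000\n"
    "full avg10=2.88 avg60=2.77 avg300=2.17 total=500"
)
print(parse_psi(sample)["some"]["avg10"])
```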
Holochain process CPU usage
CPU usage by Holochain process
p5
0
%
<
mean
48.55
%
SD
27.46
%
<
p95
80.94
%
Holochain process memory (PSS)
Proportional Set Size memory of Holochain process
p5
214.56
KiB
<
mean
283.03
KiB
SD
57.11
KiB
<
p95
377.03
KiB
Holochain process threads
Number of threads in Holochain process
p5
5
threads
<
mean
26.15
threads
SD
13.64
threads
<
p95
46
threads
Holochain process file descriptors
Number of file descriptors used by Holochain process
p5
32
file descriptors
<
mean
67.3
file descriptors
SD
19.86
file descriptors
<
p95
86
file descriptors
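Per-process figures like the thread and file descriptor counts above can be gathered on Linux by inspecting `/proc`. A minimal sketch (assuming a Linux host; not the scenario's actual collector):

```python
import os

def proc_stats(pid):
    """Count open file descriptors and threads for a process by
    listing /proc/<pid>/fd and /proc/<pid>/task (Linux only)."""
    fds = len(os.listdir(f"/proc/{pid}/fd"))
    threads = len(os.listdir(f"/proc/{pid}/task"))
    return fds, threads

# Inspect the current process as an example
print(proc_stats(os.getpid()))
```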