DHT Sync Lag

Measures the lag time between an agent publishing data and other peers being able to see it. This scenario has two roles:

  • write: Creates entries with a timestamp field, linked to a known base hash so that record_lag agents can discover them.
  • record_lag: Repeatedly queries for links from the known base hash. When a new entry is discovered, it calculates the time difference between the entry's timestamp and the current time, giving the sync lag for that entry.
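The record_lag loop described above can be sketched as follows. This is a minimal illustration only: the link query itself is abstracted away, and the function name and `(entry_hash, created_at)` tuple format are hypothetical, not the scenario's actual types.

```python
import time

def record_sync_lags(discovered_entries, seen_hashes, now=None):
    """For each newly discovered entry, compute how long it took to arrive.

    `discovered_entries` is a list of (entry_hash, created_at_unix_seconds)
    tuples, standing in for the results of a links query from the known base
    hash. Returns a list of (entry_hash, lag_seconds) for entries not seen
    on an earlier poll.
    """
    now = time.time() if now is None else now
    lags = []
    for entry_hash, created_at in discovered_entries:
        if entry_hash in seen_hashes:
            continue  # already measured on an earlier poll
        seen_hashes.add(entry_hash)
        lags.append((entry_hash, now - created_at))
    return lags
```

Each entry's lag is recorded only once, on the poll where it first appears.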
Started
Thu, 23 Apr 2026 11:14:23 UTC
Peer count
250
Peer count at end
54781
Behaviours
  • record_lag (1 agent)
  • write (1 agent)
Holochain version
0.6.1-rc.4
Wind Tunnel version
0.6.0
Run ID
dht_sync_lag_24829354002_1
Create rate
The average rate of record creation per agent.
Mean
mean
45.5
/s
Max
Highest per-partition mean rate
max
98.94
/s
Min
Lowest per-partition mean rate
min
9.95
/s
Sync lag timing
The average time between when a record was created and when it was first seen by an agent.
Mean
mean
1.23712168686e+31
s
SD
2.74696872328e+31
s
Max
Highest per-partition mean latency
max mean
1.4236303105900002e+32
s
Min
Lowest per-partition mean latency
min mean
1.076874
s
Sync lag rate
The average rate at which each agent discovers created records.
Mean
mean
42.12
/s
Max
Highest per-partition mean rate
max
322.67
/s
Min
Lowest per-partition mean rate
min
0
/s
Delivery ratio
The proportion of created entries successfully received by all record_lag agents.
0.0003
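The reported ratio is consistent with counting each created entry once per record_lag receiver. A quick check against this run's totals, assuming that interpretation (the report does not state the formula explicitly):

```python
# Hypothetical reconstruction of the delivery ratio from this run's counts,
# assuming every created entry should ideally reach each record_lag agent once.
sent_total = 1_648_570        # entries created by write agents
received_total = 17_878       # entries received by record_lag agents
record_lag_agents = 35        # receivers reported under "Agents affected"

expected_receipts = sent_total * record_lag_agents
delivery_ratio = received_total / expected_receipts
print(round(delivery_ratio, 4))  # 0.0003
```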
Sent count
The total number of entries sent (created) by write agents, by agent.
Total
1648570
Agents affected
202 / 202
Mean per agent
Total count divided by the number of partitions, rounded to the nearest whole number
8161
Max
Highest count in any single partition
17831
Min
Lowest count in any single partition
403
Received count
The total number of entries received by record_lag agents, by agent.
Total
17878
Agents affected
34 / 35
Mean per agent
Total count divided by the number of partitions, rounded to the nearest whole number
511
Max
Highest count in any single partition
2082
Min
Lowest count in any single partition
0
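The per-agent means above can be re-derived from the totals. The dashboard labels the figure "per agent" while describing it as "per partition"; the check below assumes the divisor is the reported agent count:

```python
# Re-deriving the reported "Mean per agent" values from the totals,
# assuming agents and partitions coincide in this run.
sent_total, sent_agents = 1_648_570, 202
received_total, received_agents = 17_878, 35

print(round(sent_total / sent_agents))          # 8161
print(round(received_total / received_agents))  # 511
```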
Error count
The number of errors encountered during the scenario.
174
Holochain Metrics
Cascade duration
Time taken to execute a cascade (get) query inside Holochain.
mean
0.003046
s
SD
0.040708
s
p50
0.001477
s
<
p95
0.06644
s
<
p99
0.212941
s
Cascade fetch errors
Count of network fetch errors during cascade calls.
total
11
over 16870.781879805s
mean rate
314.47
/s
std
2663.13
/s
p5
0
/s
p95
839.4
/s
peak
115748.14
/s
WASM usage: 'timed::created_timed_entry'
Metered usage count of the WASM ribosome for this zome function.
total
520192336
over 39582.021010207s
mean rate
1.83845211525e+10
/s
std
2.34448236372e+11
/s
p5
5.975882379e+07
/s
p95
5.42766696496e+10
/s
peak
1.81520785032e+13
/s
WASM usage: 'timed::get_timed_entries_local'
Metered usage count of the WASM ribosome for this zome function.
total
1079351964
over 3155.504916755s
mean rate
3.9101877694e+11
/s
std
1.67551423398e+12
/s
p5
5.8597595116e+08
/s
p95
1.30413629568e+12
/s
peak
4.03352147151e+13
/s
WASM usage: 'timed_integrity::entry_defs'
Metered usage count of the WASM ribosome for this zome function.
total
58863232
over 39582.021010207s
mean rate
8.82937614001e+09
/s
std
1.85902284893e+11
/s
p5
7.7669779e+06
/s
p95
1.00728336148e+10
/s
peak
1.8884082153e+13
/s
Zome call duration: 'timed::created_timed_entry'
Duration of this zome call as measured by Holochain internally.
mean
0.012558
s
SD
0.012336
s
p50
0.010845
s
<
p95
0.039953
s
<
p99
0.071431
s
Zome call duration: 'timed::get_timed_entries_local'
Duration of this zome call as measured by Holochain internally.
mean
0.027988
s
SD
0.531592
s
p50
0.032424
s
<
p95
0.526739
s
<
p99
2.786198
s
WASM call duration: 'timed::created_timed_entry'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.011038
s
SD
0.011868
s
p50
0.009301
s
<
p95
0.037434
s
<
p99
0.068584
s
WASM call duration: 'timed::get_timed_entries_local'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.027098
s
SD
0.531369
s
p50
0.031106
s
<
p95
0.524401
s
<
p99
2.783775
s
WASM call duration: 'timed_integrity::entry_defs'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.001575
s
SD
0.00065
s
p50
0.001614
s
<
p95
0.002742
s
<
p99
0.004524
s
Host function call duration: '__hc__create_1'
Duration of this host function call invoked from within WASM.
mean
0.001532
s
SD
0.004408
s
p50
0.000628
s
<
p95
0.010956
s
<
p99
0.024198
s
Host function call duration: '__hc__create_link_1'
Duration of this host function call invoked from within WASM.
mean
0.001584
s
SD
0.004661
s
p50
0.000612
s
<
p95
0.011769
s
<
p99
0.026014
s
Host function call duration: '__hc__get_1'
Duration of this host function call invoked from within WASM.
mean
0.003291
s
SD
0.056379
s
p50
0.002097
s
<
p95
0.106501
s
<
p99
0.32752
s
Host function call duration: '__hc__get_links_1'
Duration of this host function call invoked from within WASM.
mean
0.001428
s
SD
0.008025
s
p50
0.001553
s
<
p95
0.016505
s
<
p99
0.047723
s
Host function call duration: '__hc__zome_info_1'
Duration of this host function call invoked from within WASM.
mean
0.002782
s
SD
0.001479
s
p50
0.002789
s
<
p95
0.005995
s
<
p99
0.009947
s
Post-commit duration
Time spent executing post-commit workflows.
mean
0.000892
s
SD
0.00056
s
p50
0.000871
s
<
p95
0.001962
s
<
p99
0.003104
s
Conductor uptime
Conductor uptime gauge. A drop in the trend indicates a restart during the run.
p5
93.35
s
<
mean
917.62
s
SD
530.58
s
<
p95
1746.14
s
Integrated ops
Count of DHT ops integrated since the conductor started. Resets on restart.
total
38231
over 39582.02101216s
mean rate
5.26279937e+06
/s
std
1.0802736124e+08
/s
p5
7656.49
/s
p95
6.04257967e+06
/s
peak
8.58971141782e+09
/s
Integration delay
Delay between an op being stored and being integrated. High values indicate the pipeline is falling behind.
mean
46.258504
s
SD
73.626584
s
p50
6.887086
s
<
p95
171.137542
s
<
p99
374.820197
s
Validation attempts per op
Number of validation attempts required per op. Values consistently above 1 indicate retries.
mean
1.217919
attempts
SD
0.340433
attempts
p50
1.07905
attempts
<
p95
2.005932
attempts
<
p99
2.129032
attempts
App validation workflow duration
Time spent running the app validation workflow.
mean
6.7345
s
SD
26.279008
s
p50
2.329113
s
<
p95
71.708952
s
<
p99
128.80028
s
Countersigning workflow duration
Time spent running the countersigning workflow.
mean
0.00477
s
SD
0.004497
s
p50
0.003629
s
<
p95
0.014162
s
<
p99
0.022156
s
Integrate DHT ops workflow duration
Time spent running the integration workflow.
mean
0.022805
s
SD
0.406638
s
p50
0.01349
s
<
p95
0.153335
s
<
p99
0.84309
s
Publish DHT ops workflow duration
Time spent running the publish workflow.
mean
0.856491
s
SD
2.146987
s
p50
0.276122
s
<
p95
5.55026
s
<
p99
9.062512
s
System validation workflow duration
Time spent running the sys validation workflow.
mean
15.235875
s
SD
50.684097
s
p50
21.251477
s
<
p95
142.888848
s
<
p99
232.86902
s
Validation receipt workflow duration
Time spent running the validation receipt workflow.
mean
0.132073
s
SD
20.804735
s
p50
0.095641
s
<
p95
32.584671
s
<
p99
79.381096
s
Authored DB connection use time
Time spent holding authored database connections.
mean
0.005332
s
SD
0.007865
s
p50
0.008744
s
<
p95
0.018504
s
<
p99
0.034826
s
DHT DB connection use time
Time spent holding DHT database connections.
mean
0.001828
s
SD
0.004042
s
p50
0.002578
s
<
p95
0.010677
s
<
p99
0.014056
s
Conductor DB connection use time
Time spent holding conductor database connections.
mean
0.000258
s
SD
0.000148
s
p50
0.000312
s
<
p95
0.000548
s
<
p99
0.000794
s
Cache DB connection use time
Time spent holding cache database connections.
mean
0.001912
s
SD
0.007356
s
p50
0.000225
s
<
p95
0.002626
s
<
p99
0.032121
s
WASM DB connection use time
Time spent holding WASM database connections.
mean
0.000251
s
SD
0.000157
s
p50
0.000242
s
<
p95
0.000453
s
<
p99
0.000878
s
Peer meta store DB connection use time
Time spent holding peer meta store database connections.
mean
0.000162
s
SD
0.000122
s
p50
0.000153
s
<
p95
0.000366
s
<
p99
0.000598
s
Write transaction duration
Duration of exclusive write transactions across all databases.
mean
0.006237
s
SD
0.102778
s
p50
0.005096
s
<
p95
0.271463
s
<
p99
0.526671
s
Lair keystore: signing
Duration of Ed25519 signing requests to the Lair keystore.
mean
0.000446
s
SD
0.004141
s
p50
0.000457
s
<
p95
0.00979
s
<
p99
0.022234
s
P2P request roundtrip: 'get'
Time spent sending a get request and awaiting its response.
mean
2.765146
s
SD
29.220726
s
p50
2.935082
s
<
p95
60.017154
s
<
p99
60.268037
s
P2P request roundtrip: 'send_validation_receipts'
Time spent sending a send_validation_receipts request and awaiting its response.
mean
1.023277
s
SD
25.190164
s
p50
1.366051
s
<
p95
60.01035
s
<
p99
60.030734
s
Host Metrics
User CPU usage
CPU usage by user space
42.06
%
Network receive rate (primary)
Rate of bytes received on primary network interface
p5
21.83
KiB/s
<
mean
564.72
KiB/s
SD
453.31
KiB/s
<
p95
1.35
MiB/s
Network send rate (primary)
Rate of bytes sent on primary network interface
p5
4.42
KiB/s
<
mean
141.69
KiB/s
SD
351.85
KiB/s
<
p95
573.59
KiB/s
Total bytes received
Total bytes received on primary network interface
count
245.34
GiB
mean
14.83
MiB/s
Total bytes sent
Total bytes sent on primary network interface
count
60.45
GiB
mean
3.65
MiB/s
CPU spike anomaly
Whether a CPU spike anomaly was detected
Detected

Warning CPU p99 reached 96.0%

Memory leak anomaly
Whether a memory leak anomaly was detected
Detected

Warning Memory growing at 972.32 MiB/s

Disk full anomaly
Whether a disk full anomaly was detected
NotDetected
Swap thrashing anomaly
Whether a swap thrashing anomaly was detected
Detected

Critical Heavy swap usage (50.5% swap used)

System overload anomaly
Whether a system overload anomaly was detected
Detected

Warning 23% of hosts overloaded (load5/ncpus > 1.0)

CPU usage
Total CPU usage and kernel CPU usage
Total
p5
18.13
%
<
mean
55.77
%
SD
20.51
%
<
p95
90.61
%
System
13.71
%
CPU percentiles
CPU usage percentiles
p50
52.72
%
p95
90.61
%
p99
96.04
%
CPU usage above 80%
Number of hosts above 80% CPU and mean time spent above threshold for those hosts
count
214
hosts
mean time
0.17
s
Memory used percentage
Percentage of memory used
p5
7.72
%
<
mean
11.28
%
SD
7.23
%
<
p95
16.71
%
Memory available percentage
Percentage of available memory
p5
83.29
%
<
mean
88.72
%
SD
7.23
%
<
p95
92.28
%
Max host memory used
Maximum memory usage percentage across all hosts
max
78.21
%
Max host swap used percentage
Maximum swap space usage percentage across all hosts
max
50.47
%
Memory growth rate
Rate of memory growth over time
growth
972.32
MiB/s
Disk read throughput
Disk read throughput in MiB/s
3.74
MiB/s
Disk write throughput
Disk write throughput in MiB/s
379.52
MiB/s
Disk space utilization risk
Number of hosts nearing disk space capacity by mount point
Mount Point /
0/205
hosts
Mount Point /efi-boot
0/6
hosts
Mount Point /etc/hostname
0/15
hosts
Mount Point /etc/hosts
0/15
hosts
Mount Point /etc/resolv.conf
0/15
hosts
Mount Point /nix/store
0/10
hosts
System load average
System load averages over 1, 5, and 15 minutes. This is an unnormalised value not divided by number of CPUs, so it is only meaningful if all machines have the same core count.
1 min
2.14
5 min
1.87
15 min
1.36
CPU overloaded hosts
Percentage of hosts that experienced CPU overload
22.75
%
CPU pressure
CPU pressure over 10, 60, and 300 second averages
10 second average
p5
0.19
%
<
mean
18.6093
%
SD
18.9106
%
<
p95
56.15
%
60 second average
18.1224
%
300 second average
15.7241
%
Memory pressure some
Memory pressure (some tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
mean
0
%
SD
0.0047
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
Memory pressure full
Memory pressure (all tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
mean
0
%
SD
0
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
I/O pressure some
I/O pressure (some tasks stalled) over 10, 60 and 300 second averages
10 second average
p5
0
%
<
mean
4.8705
%
SD
8.571
%
<
p95
20.97
%
60 second average
4.7584
%
300 second average
4.1038
%
I/O pressure full
I/O pressure (all tasks stalled) over 10, 60 and 300 second averages
10 second average
p5
0
%
<
mean
3.1614
%
SD
6.8356
%
<
p95
15.21
%
60 second average
3.0766
%
300 second average
2.6413
%
Holochain process CPU usage
CPU usage by Holochain process
p5
18.9
%
<
mean
52.49
%
SD
19.24
%
<
p95
86.18
%
Holochain process memory (PSS)
Proportional Set Size memory of Holochain process
p5
205.37
KiB
<
mean
316.46
KiB
SD
104.93
KiB
<
p95
516.33
KiB
Holochain process threads
Number of threads in Holochain process
p5
12
threads
<
mean
27.86
threads
SD
13.02
threads
<
p95
52
threads
Holochain process file descriptors
Number of file descriptors used by Holochain process
p5
56
file descriptors
<
mean
74.29
file descriptors
SD
15.07
file descriptors
<
p95
96
file descriptors

Remote Call Rate

Tests the throughput of remote_call operations. Each agent in this scenario waits for a certain number of peers to be available, or for up to two minutes, whichever happens first, before starting its behaviour.
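The wait-for-peers gate described above can be sketched as a simple polling loop. `peer_count_fn` is a hypothetical stand-in for whatever peer-discovery query the scenario actually performs; the injectable clock and sleep exist only to make the sketch testable.

```python
import time

def wait_for_peers(peer_count_fn, min_peers, timeout_s=120.0, poll_s=1.0,
                   sleep_fn=time.sleep, clock=time.monotonic):
    """Block until peer_count_fn() >= min_peers or timeout_s elapses.

    Returns the last observed peer count, so the caller can proceed either
    way once the deadline passes.
    """
    deadline = clock() + timeout_s
    count = peer_count_fn()
    while count < min_peers and clock() < deadline:
        sleep_fn(poll_s)
        count = peer_count_fn()
    return count
```

Using a monotonic clock avoids the deadline jumping if the wall clock is adjusted mid-wait.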

Started
Thu, 23 Apr 2026 10:25:05 UTC
Peer count
250
Peer count at end
55030
Behaviours
  • default (1 agent)
Holochain version
0.6.1-rc.4
Wind Tunnel version
0.6.0
Run ID
remote_call_rate_24829354002_1
Dispatch timing
The time between sending a remote call and the remote handler being invoked. The two timestamps come from different hosts, so clock skew between machines can produce negative values.
Mean
mean
-12.753365
s
SD
64.843744
s
Max
Highest per-partition mean latency
max mean
92.600936
s
Min
Lowest per-partition mean latency
min mean
-137.354955
s
Round-trip timing
The total elapsed time to get a response to the client.
Mean
mean
0.377613
s
SD
1.271415
s
Max
Highest per-partition mean latency
max mean
1.265496
s
Min
Lowest per-partition mean latency
min mean
0.134897
s
Error count
The total number of errors accumulated during the run.
4470
Holochain Metrics
Cascade fetch errors
Count of network fetch errors during cascade calls.
total
0
over 1180.653522963s
Not enough time data to show a trend.
WASM usage: 'remote_call::call_echo_timestamp'
Metered usage count of the WASM ribosome for this zome function.
total
467146
over 18950.515344695s
mean rate
7.7031028061e+08
/s
std
9.03877412692e+09
/s
p5
1.31395997e+06
/s
p95
1.71689380208e+09
/s
peak
4.19423803735e+11
/s
WASM usage: 'remote_call::echo_timestamp'
Metered usage count of the WASM ribosome for this zome function.
total
19059432
over 2939.584373468s
mean rate
1.8590660354e+08
/s
std
2.93970052402e+09
/s
p5
101329
/s
p95
4.3110430519e+08
/s
peak
2.69903563941e+11
/s
WASM usage: 'remote_call::init'
Metered usage count of the WASM ribosome for this zome function.
total
0
over 37343.276267545s
Not enough time data to show a trend.
Zome call duration: 'remote_call::call_echo_timestamp'
Duration of this zome call as measured by Holochain internally.
mean
0.343502
s
SD
0.392409
s
p50
0.363066
s
<
p95
0.954333
s
<
p99
2.212131
s
Zome call duration: 'remote_call::echo_timestamp'
Duration of this zome call as measured by Holochain internally.
mean
0.003946
s
SD
0.002336
s
p50
0.003748
s
<
p95
0.005727
s
<
p99
0.00819
s
WASM call duration: 'remote_call::call_echo_timestamp'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
2.115867
s
SD
5.028503
s
p50
2.596859
s
<
p95
6.064563
s
<
p99
12.239272
s
WASM call duration: 'remote_call::echo_timestamp'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.00236
s
SD
0.001349
s
p50
0.002259
s
<
p95
0.003604
s
<
p99
0.005453
s
WASM call duration: 'remote_call::init'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.003472
s
SD
0.002021
s
p50
0.003249
s
<
p95
0.006741
s
<
p99
0.00947
s
Host function call duration: '__hc__call_1'
Duration of this host function call invoked from within WASM.
mean
2.113154
s
SD
5.028403
s
p50
2.593903
s
<
p95
6.063593
s
<
p99
12.233635
s
Host function call duration: '__hc__create_1'
Duration of this host function call invoked from within WASM.
mean
0.000609
s
SD
0.000378
s
p50
0.000562
s
<
p95
0.001209
s
<
p99
0.002308
s
Host function call duration: '__hc__sys_time_1'
Duration of this host function call invoked from within WASM.
mean
2.6e-05
s
SD
2.1e-05
s
p50
1.9e-05
s
<
p95
4.2e-05
s
<
p99
5.4e-05
s
Host function call duration: '__hc__zome_info_1'
Duration of this host function call invoked from within WASM.
mean
0.000456
s
SD
0.000457
s
p50
0.000412
s
<
p95
0.000958
s
<
p99
0.002311
s
Post-commit duration
Time spent executing post-commit workflows.
mean
0.000692
s
SD
0.000691
s
p50
0.000438
s
<
p95
0.002061
s
<
p99
0.003532
s
Conductor uptime
Conductor uptime gauge. A drop in the trend indicates a restart during the run.
p5
46.01
s
<
mean
450.61
s
SD
259.64
s
<
p95
855.91
s
Integration delay
Delay between an op being stored and being integrated. High values indicate the pipeline is falling behind.
mean
35.040524
s
SD
20.688467
s
p50
26.42788
s
<
p95
66.264007
s
<
p99
84.612116
s
Validation attempts per op
Number of validation attempts required per op. Values consistently above 1 indicate retries.
mean
2.063006
attempts
SD
0.227043
attempts
p50
2.039314
attempts
<
p95
2.187879
attempts
<
p99
2.327299
attempts
App validation workflow duration
Time spent running the app validation workflow.
mean
0.446043
s
SD
1.025766
s
p50
0.422361
s
<
p95
2.669738
s
<
p99
4.702243
s
Countersigning workflow duration
Time spent running the countersigning workflow.
mean
0.004951
s
SD
0.004816
s
p50
0.003522
s
<
p95
0.012815
s
<
p99
0.02393
s
Integrate DHT ops workflow duration
Time spent running the integration workflow.
mean
0.010142
s
SD
0.050322
s
p50
0.004097
s
<
p95
0.019266
s
<
p99
0.052377
s
Publish DHT ops workflow duration
Time spent running the publish workflow.
mean
0.005571
s
SD
0.002978
s
p50
0.005613
s
<
p95
0.010943
s
<
p99
0.015617
s
System validation workflow duration
Time spent running the sys validation workflow.
mean
11.542388
s
SD
12.950762
s
p50
13.344758
s
<
p95
42.380305
s
<
p99
57.251194
s
Validation receipt workflow duration
Time spent running the validation receipt workflow.
mean
11.317889
s
SD
24.254474
s
p50
0.972013
s
<
p95
64.210435
s
<
p99
117.350134
s
Authored DB connection use time
Time spent holding authored database connections.
mean
0.000263
s
SD
0.000132
s
p50
0.000276
s
<
p95
0.000432
s
<
p99
0.000654
s
DHT DB connection use time
Time spent holding DHT database connections.
mean
0.000734
s
SD
0.001166
s
p50
0.000262
s
<
p95
0.001009
s
<
p99
0.002372
s
Conductor DB connection use time
Time spent holding conductor database connections.
mean
0.000193
s
SD
0.000107
s
p50
0.00018
s
<
p95
0.000269
s
<
p99
0.000373
s
Cache DB connection use time
Time spent holding cache database connections.
mean
0.000166
s
SD
0.000103
s
p50
0.000153
s
<
p95
0.000255
s
<
p99
0.000489
s
WASM DB connection use time
Time spent holding WASM database connections.
mean
0.00024
s
SD
0.000172
s
p50
0.000224
s
<
p95
0.000476
s
<
p99
0.000754
s
Peer meta store DB connection use time
Time spent holding peer meta store database connections.
mean
0.00014
s
SD
0.000106
s
p50
0.000126
s
<
p95
0.000199
s
<
p99
0.000312
s
Write transaction duration
Duration of exclusive write transactions across all databases.
mean
0.00141
s
SD
0.087403
s
p50
0.001503
s
<
p95
0.245381
s
<
p99
0.375184
s
Lair keystore: signing
Duration of Ed25519 signing requests to the Lair keystore.
mean
0.000646
s
SD
0.00206
s
p50
0.000471
s
<
p95
0.000795
s
<
p99
0.002233
s
P2P request roundtrip: 'get'
Time spent sending a get request and awaiting its response.
mean
0.777885
s
SD
29.70204
s
p50
0.53145
s
<
p95
60.003814
s
<
p99
66.681754
s
P2P request roundtrip: 'send_validation_receipts'
Time spent sending a send_validation_receipts request and awaiting its response.
mean
0.639322
s
SD
28.130902
s
p50
0.38684
s
<
p95
60.001746
s
<
p99
72.364177
s
P2P request roundtrip: 'call_remote'
Time spent sending a call_remote request and awaiting its response.
mean
0.957454
s
SD
29.996851
s
p50
0.67769
s
<
p95
60.001772
s
<
p99
69.43682
s
Host Metrics
User CPU usage
CPU usage by user space
6.82
%
Network receive rate (primary)
Rate of bytes received on primary network interface
p5
449
B/s
<
mean
675.25
KiB/s
SD
684.57
KiB/s
<
p95
1.79
MiB/s
Network send rate (primary)
Rate of bytes sent on primary network interface
p5
164
B/s
<
mean
71.70
KiB/s
SD
69.27
KiB/s
<
p95
186.68
KiB/s
Total bytes received
Total bytes received on primary network interface
count
165.35
GiB
mean
7.31
MiB/s
Total bytes sent
Total bytes sent on primary network interface
count
18.84
GiB
mean
852.69
KiB/s
CPU spike anomaly
Whether a CPU spike anomaly was detected
NotDetected
Memory leak anomaly
Whether a memory leak anomaly was detected
Detected

Warning Memory growing at 703.78 MiB/s

Disk full anomaly
Whether a disk full anomaly was detected
NotDetected
Swap thrashing anomaly
Whether a swap thrashing anomaly was detected
Detected

Critical Heavy swap usage (47.6% swap used)

System overload anomaly
Whether a system overload anomaly was detected
Detected

Warning 1% of hosts overloaded (load5/ncpus > 1.0)

CPU usage
Total CPU usage and kernel CPU usage
Total
p5
0.65
%
<
mean
10.12
%
SD
13.7
%
<
p95
32.2
%
System
3.30
%
CPU percentiles
CPU usage percentiles
p50
6.97
%
p95
32.20
%
p99
85.63
%
CPU usage above 80%
Number of hosts above 80% CPU and mean time spent above threshold for those hosts
count
157
hosts
mean time
0.01
s
Memory used percentage
Percentage of memory used
p5
5.87
%
<
mean
10
%
SD
6.78
%
<
p95
14.25
%
Memory available percentage
Percentage of available memory
p5
85.75
%
<
mean
90
%
SD
6.78
%
<
p95
94.13
%
Max host memory used
Maximum memory usage percentage across all hosts
max
77.26
%
Max host swap used percentage
Maximum swap space usage percentage across all hosts
max
47.56
%
Memory growth rate
Rate of memory growth over time
growth
703.78
MiB/s
Disk read throughput
Disk read throughput in MiB/s
0.18
MiB/s
Disk write throughput
Disk write throughput in MiB/s
48.59
MiB/s
Disk space utilization risk
Number of hosts nearing disk space capacity by mount point
Mount Point /
0/214
hosts
Mount Point /efi-boot
0/7
hosts
Mount Point /etc/hostname
0/16
hosts
Mount Point /etc/hosts
0/16
hosts
Mount Point /etc/resolv.conf
0/16
hosts
Mount Point /nix/store
0/11
hosts
System load average
System load averages over 1, 5, and 15 minutes. This is an unnormalised value not divided by number of CPUs, so it is only meaningful if all machines have the same core count.
1 min
0.34
5 min
0.27
15 min
0.16
CPU overloaded hosts
Percentage of hosts that experienced CPU overload
0.82
%
CPU pressure
CPU pressure over 10, 60, and 300 second averages
10 second average
p5
0
%
<
mean
2.2048
%
SD
7.1382
%
<
p95
6
%
60 second average
2.1330
%
300 second average
1.6512
%
Memory pressure some
Memory pressure (some tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
<
mean
0.0001
%
SD
0.014
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
Memory pressure full
Memory pressure (all tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
<
mean
0.0001
%
SD
0.014
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
I/O pressure some
I/O pressure (some tasks stalled) over 10, 60 and 300 second averages
10 second average
p5
0
%
<
mean
0.5446
%
SD
2.6014
%
<
p95
2.22
%
60 second average
0.5937
%
300 second average
0.5526
%
I/O pressure full
I/O pressure (all tasks stalled) over 10, 60 and 300 second averages
10 second average
p5
0
%
<
mean
0.3239
%
SD
1.7831
%
<
p95
1.44
%
60 second average
0.3765
%
300 second average
0.3800
%
Holochain process CPU usage
CPU usage by Holochain process
p5
0
%
<
mean
8.07
%
SD
12.61
%
<
p95
19.9
%
Holochain process memory (PSS)
Proportional Set Size memory of Holochain process
p5
210.60
KiB
<
mean
234.81
KiB
SD
30.97
KiB
<
p95
262.55
KiB
Holochain process threads
Number of threads in Holochain process
p5
1
threads
<
mean
26.55
threads
SD
14.46
threads
<
p95
54
threads
Holochain process file descriptors
Number of file descriptors used by Holochain process
p5
0
file descriptors
<
mean
70.55
file descriptors
SD
23.04
file descriptors
<
p95
95
file descriptors

Remote Signals

This scenario tests the throughput of remote_signals operations.

Started
Thu, 23 Apr 2026 10:46:46 UTC
Peer count
500
Peer count at end
109562
Behaviours
  • default (1 agent)
Holochain version
0.6.1-rc.4
Wind Tunnel version
0.6.0
Run ID
remote_signals_24829354002_1
Round trip time
The time from when the origin dispatches a signal until it receives the remote side's response signal.
mean
1.232596
s
SD
9.401961
s
p50
0.263066
s
<
p95
1.833116
s
<
p99
21.41636
s
Timeouts
The number of timeouts waiting for the remote side's response signal (default timeout is 20 seconds).
total
537
over 17640.467146702s
mean rate
1.20236472e+06
/s
std
1.1297970781e+08
/s
p5
0.33
/s
p95
69428.14
/s
peak
2.03333333333e+10
/s
Timeout rate
The proportion of remote signal round-trips that timed out.
0.0025
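Assuming the timeout rate is simply timeouts divided by total round trips (the report does not define the formula), the totals imply roughly 214,800 attempted round trips:

```python
# Inferring the implied number of round trips from the reported figures,
# under the assumption that rate = timeouts / round_trips.
timeouts = 537
timeout_rate = 0.0025

implied_round_trips = timeouts / timeout_rate
print(round(implied_round_trips))  # 214800
```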
Holochain Metrics
Cascade fetch errors
Count of network fetch errors during cascade calls.
total
0
over 17557.434629966s
Not enough time data to show a trend.
WASM usage: 'remote_signal::init'
Metered usage count of the WASM ribosome for this zome function.
total
0
over 37357.543184783s
Not enough time data to show a trend.
WASM usage: 'remote_signal::recv_remote_signal'
Metered usage count of the WASM ribosome for this zome function.
total
11919787
over 17675.184246935s
mean rate
9.4310906057e+09
/s
std
1.61723807587e+11
/s
p5
2.19769049e+06
/s
p95
1.78228816096e+10
/s
peak
1.13705644961e+13
/s
WASM usage: 'remote_signal::signal_request'
Metered usage count of the WASM ribosome for this zome function.
total
15024510
over 17675.184246935s
mean rate
3.4802894276e+08
/s
std
1.03748351547e+10
/s
p5
0
/s
p95
4.7669879954e+08
/s
peak
1.0673418735e+12
/s
Zome call duration: 'remote_signal::recv_remote_signal'
Duration of this zome call as measured by Holochain internally.
mean
0.004068
s
SD
0.005615
s
p50
0.003829
s
<
p95
0.005279
s
<
p99
0.00708
s
Zome call duration: 'remote_signal::signal_request'
Duration of this zome call as measured by Holochain internally.
mean
0.004433
s
SD
0.005626
s
p50
0.004207
s
<
p95
0.005886
s
<
p99
0.007235
s
WASM call duration: 'remote_signal::init'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.003066
s
SD
0.001769
s
p50
0.002807
s
<
p95
0.006074
s
<
p99
0.009282
s
WASM call duration: 'remote_signal::recv_remote_signal'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.002482
s
SD
0.003236
s
p50
0.002347
s
<
p95
0.003351
s
<
p99
0.005008
s
WASM call duration: 'remote_signal::signal_request'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.00251
s
SD
0.00286
s
p50
0.002417
s
<
p95
0.003424
s
<
p99
0.004593
s
Host function call duration: '__hc__create_1'
Duration of this host function call invoked from within WASM.
mean
0.000557
s
SD
0.000502
s
p50
0.000471
s
<
p95
0.001252
s
<
p99
0.00267
s
Host function call duration: '__hc__emit_signal_1'
Duration of this host function call invoked from within WASM.
mean
0.000101
s
SD
0.000161
s
p50
9.4e-05
s
<
p95
0.000138
s
<
p99
0.000185
s
Host function call duration: '__hc__send_remote_signal_1'
Duration of this host function call invoked from within WASM.
mean
0.000109
s
SD
0.000174
s
p50
8.9e-05
s
<
p95
0.000166
s
<
p99
0.000248
s
Host function call duration: '__hc__sys_time_1'
Duration of this host function call invoked from within WASM.
mean
1.9e-05
s
SD
4.3e-05
s
p50
1.5e-05
s
<
p95
2.8e-05
s
<
p99
4.8e-05
s
Host function call duration: '__hc__zome_info_1'
Duration of this host function call invoked from within WASM.
mean
0.000502
s
SD
0.000557
s
p50
0.000373
s
<
p95
0.001122
s
<
p99
0.002773
s
Emitted local signals
Count of local signals emitted via the emit_signal host function.
total
559
over 17675.184271835s
mean rate
2.2584857e+06
/s
std
2.3863295029e+08
/s
p5
54.65
/s
p95
825322
/s
peak
3.33333333333e+10
/s
Sent remote signals
Count of remote signals sent via the send_remote_signal host function.
total
1058
over 17675.184272608s
mean rate
451639.88
/s
std
6.01357369e+06
/s
p5
24.31
/s
p95
769242.04
/s
peak
4.0384615385e+08
/s
Post-commit duration
Time spent executing post-commit workflows.
mean
0.000796
s
SD
0.001175
s
p50
0.000499
s
<
p95
0.002181
s
<
p99
0.005823
s
Conductor uptime
Conductor uptime gauge. A drop in the trend indicates a restart during the run.
p5
45.2
s
<
mean
448.58
s
SD
260.09
s
<
p95
855.43
s
Integration delay
Delay between an op being stored and being integrated. High values indicate the pipeline is falling behind.
mean
50.637833
s
SD
29.482848
s
p50
35.021584
s
<
p95
83.281678
s
<
p99
139.771657
s
Validation attempts per op
Number of validation attempts required per op. Values consistently above 1 indicate retries.
mean
2.027002
attempts
SD
0.320692
attempts
p50
2.00766
attempts
<
p95
2.063331
attempts
<
p99
2.119325
attempts
App validation workflow duration
Time spent running the app validation workflow.
mean
1.150754
s
SD
2.407022
s
p50
0.560184
s
<
p95
5.410797
s
<
p99
8.788292
s
Countersigning workflow duration
Time spent running the countersigning workflow.
mean
0.005948
s
SD
0.007272
s
p50
0.003587
s
<
p95
0.020818
s
<
p99
0.035823
s
Integrate DHT ops workflow duration
Time spent running the integration workflow.
mean
0.010598
s
SD
0.297922
s
p50
0.005036
s
<
p95
0.016703
s
<
p99
0.060487
s
Publish DHT ops workflow duration
Time spent running the publish workflow.
mean
0.008451
s
SD
0.027367
s
p50
0.005936
s
<
p95
0.014159
s
<
p99
0.029799
s
System validation workflow duration
Time spent running the sys validation workflow.
mean
27.849803
s
SD
22.941693
s
p50
25.446563
s
<
p95
69.478748
s
<
p99
90.358732
s
Validation receipt workflow duration
Time spent running the validation receipt workflow.
mean
9.135676
s
SD
17.754973
s
p50
0.776286
s
<
p95
45.576386
s
<
p99
80.013323
s
Authored DB connection use time
Time spent holding authored database connections.
mean
0.000274
s
SD
0.000206
s
p50
0.000286
s
<
p95
0.000544
s
<
p99
0.001354
s
DHT DB connection use time
Time spent holding DHT database connections.
mean
0.000563
s
SD
0.002376
s
p50
0.000177
s
<
p95
0.000919
s
<
p99
0.002672
s
Conductor DB connection use time
Time spent holding conductor database connections.
mean
0.000238
s
SD
0.000207
s
p50
0.000217
s
<
p95
0.00043
s
<
p99
0.001185
s
Cache DB connection use time
Time spent holding cache database connections.
mean
0.000175
s
SD
0.000216
s
p50
0.000158
s
<
p95
0.000336
s
<
p99
0.000612
s
WASM DB connection use time
Time spent holding WASM database connections.
mean
0.000255
s
SD
0.00031
s
p50
0.000196
s
<
p95
0.000512
s
<
p99
0.001748
s
Peer meta store DB connection use time
Time spent holding peer meta store database connections.
mean
0.000161
s
SD
0.000242
s
p50
0.000131
s
<
p95
0.000256
s
<
p99
0.000445
s
Write transaction duration
Duration of exclusive write transactions across all databases.
mean
0.001555
s
SD
0.118788
s
p50
0.001764
s
<
p95
0.341865
s
<
p99
0.534824
s
Lair keystore: signing
Duration of Ed25519 signing requests to the Lair keystore.
mean
0.00144
s
SD
0.008832
s
p50
0.000724
s
<
p95
0.001375
s
<
p99
0.003785
s
P2P request roundtrip: 'get'
Time spent sending a get request and awaiting its response.
mean
2.91793
s
SD
29.466212
s
p50
1.237466
s
<
p95
60.002836
s
<
p99
61.41686
s
P2P request roundtrip: 'send_validation_receipts'
Time spent sending a send_validation_receipts request and awaiting its response.
mean
1.018384
s
SD
27.29411
s
p50
0.499264
s
<
p95
60.001665
s
<
p99
60.002161
s
P2P received remote signals
Count of remote signals received.
total
559
over 17675.184331438s
mean rate
436660.7
/s
std
1.005832202e+07
/s
p5
31.13
/s
p95
823472.2
/s
peak
1.0487804878e+09
/s
Host Metrics
User CPU usage
CPU usage by user space
13.33
%
Network receive rate (primary)
Rate of bytes received on primary network interface
p5
46.43
KiB/s
<
mean
1.07
MiB/s
SD
1.06
MiB/s
<
p95
3.15
MiB/s
Network send rate (primary)
Rate of bytes sent on primary network interface
p5
9.37
KiB/s
<
mean
120.44
KiB/s
SD
86.94
KiB/s
<
p95
263.75
KiB/s
Total bytes received
Total bytes received on primary network interface
count
226.67
GiB
mean
13.14
MiB/s
Total bytes sent
Total bytes sent on primary network interface
count
28.26
GiB
mean
1.64
MiB/s
CPU spike anomaly
Whether a CPU spike anomaly was detected
Detected

Warning CPU p99 reached 98.2%

Memory leak anomaly
Whether a memory leak anomaly was detected
Detected

Warning Memory growing at 1306.23 MiB/s

Disk full anomaly
Whether a disk full anomaly was detected
NotDetected
Swap thrashing anomaly
Whether a swap thrashing anomaly was detected
Detected

Critical Heavy swap usage (27.0% swap used)

System overload anomaly
Whether a system overload anomaly was detected
Detected

Warning 3% of hosts overloaded (load5/ncpus > 1.0)

CPU usage
Total CPU usage and kernel CPU usage
Total
p5
3.03
%
<
mean
20.71
%
SD
21.91
%
<
p95
76.7
%
System
7.38
%
CPU percentiles
CPU usage percentiles
p50
13.22
%
p95
76.70
%
p99
98.15
%
CPU usage above 80%
Number of hosts above 80% CPU and mean time spent above threshold for those hosts
count
184
hosts
mean time
0.05
s
Memory used percentage
Percentage of memory used
p5
6.89
%
<
mean
11.92
%
SD
6.75
%
<
p95
16.41
%
Memory available percentage
Percentage of available memory
p5
83.59
%
<
mean
88.08
%
SD
6.75
%
<
p95
93.11
%
Max host memory used
Maximum memory usage percentage across all hosts
max
78.96
%
Max host swap used percentage
Maximum swap space usage percentage across all hosts
max
26.96
%
Memory growth rate
Rate of memory growth over time
growth
1306.23
MiB/s
Disk read throughput
Disk read throughput in MiB/s
1.16
MiB/s
Disk write throughput
Disk write throughput in MiB/s
115.86
MiB/s
Disk space utilization risk
Number of hosts nearing disk space capacity by mount point
Mount Point /
0/194
hosts
Mount Point /efi-boot
0/8
hosts
Mount Point /etc/hostname
0/15
hosts
Mount Point /etc/hosts
0/15
hosts
Mount Point /etc/resolv.conf
0/15
hosts
Mount Point /nix/store
0/14
hosts
System load average
System load averages over 1, 5, and 15 minutes. This is an unnormalised value not divided by number of CPUs, so it is only meaningful if all machines have the same core count.
1 min
0.84
5 min
0.63
15 min
0.40
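Since the load figures above are unnormalised, comparing hosts or applying the overload rule quoted elsewhere in this report (load5/ncpus > 1.0) means dividing by the core count first. A minimal sketch, with hypothetical helper names:

```python
import os

def normalised_load(load5, ncpus=None):
    """Divide the 5-minute load average by the core count, so values
    above 1.0 mean more runnable tasks than CPUs."""
    ncpus = ncpus or os.cpu_count() or 1
    return load5 / ncpus

def is_overloaded(load5, ncpus):
    # The overload criterion used in this report: load5/ncpus > 1.0.
    return normalised_load(load5, ncpus) > 1.0
```

For example, a load5 of 0.63 on a 4-core host normalises to about 0.16, well below the overload threshold.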
CPU overloaded hosts
Percentage of hosts that experienced CPU overload
3.13
%
CPU pressure
CPU pressure over 10, 60, and 300 second averages
10 second average
p5
0
%
<
mean
10.2124
%
SD
18.2615
%
<
p95
53.58
%
60 second average
9.5156
%
300 second average
7.2103
%
Memory pressure some
Memory pressure (some tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
mean
0
%
SD
0
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
Memory pressure full
Memory pressure (all tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
mean
0
%
SD
0
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
I/O pressure some
I/O pressure (some tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
<
mean
0.8959
%
SD
3.4177
%
<
p95
5.49
%
60 second average
0.8050
%
300 second average
0.5826
%
I/O pressure full
I/O pressure (all tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
<
mean
0.5953
%
SD
2.6722
%
<
p95
3.1
%
60 second average
0.5335
%
300 second average
0.3960
%
Holochain process CPU usage
CPU usage by Holochain process
p5
1.08
%
<
mean
8.06
%
SD
9.46
%
<
p95
25.35
%
Holochain process memory (PSS)
Proportional Set Size memory of Holochain process
p5
181.03
KiB
<
mean
212.98
KiB
SD
30.18
KiB
<
p95
248.76
KiB
Holochain process threads
Number of threads in Holochain process
p5
11
threads
<
mean
26.55
threads
SD
14.78
threads
<
p95
57
threads
Holochain process file descriptors
Number of file descriptors used by Holochain process
p5
44
file descriptors
<
mean
77.13
file descriptors
SD
18.67
file descriptors
<
p95
99
file descriptors

Two-party countersigning

↑ Back to index

This scenario tests the performance of countersigning operations. There are two roles: initiate and participate. The participants commit an entry to advertise that they are willing to participate in sessions. They listen for sessions and participate in one at a time.
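The interplay of the two roles can be sketched as follows. This is a simplified stand-in, not the actual Wind Tunnel scenario code; all function and agent names here are hypothetical, and the real scenario drives Holochain zome calls over a network:

```python
def advertise(adverts, agent):
    """Participant role: commit an entry advertising willingness
    to take part in countersigning sessions."""
    adverts.append(agent)

def initiate_sessions(adverts, rounds=3):
    """Initiator role: discover an advertised participant and run
    one countersigning session with them per round."""
    partner = adverts[0]  # discover a willing participant
    results = []
    for i in range(rounds):
        # A session only completes once both parties have signed.
        entry = {"session": i, "signatures": {"initiator", partner}}
        results.append(entry)
    return results

adverts = []
advertise(adverts, "participant-1")   # participant advertises first
sessions = initiate_sessions(adverts) # initiator runs sessions
```

The participant handling one session at a time is what bounds the throughput reported below as the session success rate.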

Started
Thu, 23 Apr 2026 15:58:32 UTC
Peer count
3
Peer count at end
7
Behaviours
  • initiate (1 agent)
  • participate (1 agent)
Holochain version
0.6.1-rc.4
Wind Tunnel version
0.6.0
Run ID
two_party_countersigning_24829354002_1
Session accepted -- timing
The duration of the session from acceptance to completion.
Mean
mean
0.924022
s
SD
0.408478
s
Max
Highest per-partition mean latency
max mean
1.018264
s
Min
Lowest per-partition mean latency
min mean
0.82978
s
Session accepted -- success rate
The number of accepted sessions that completed per second.
Mean
mean
4.06
/s
Max
Highest per-partition mean rate
max
4.08
/s
Min
Lowest per-partition mean rate
min
4.04
/s
Session accepted -- success ratio
Fraction of accepted sessions that completed successfully (0–1).
1
Session accepted -- failures
Accepted sessions that failed to complete.
Total
1
Agents affected
1 / 1
Mean per agent
Total count divided by number of partitions, rounded to nearest whole number
1
Max
Highest count in any single partition
1
Min
Lowest count in any single partition
1
Not enough time data to show a trend.
Session initiated -- timing
The duration of the session from initiation to completion.
Mean
mean
1.254036
s
SD
0.731481
s
Max
Highest per-partition mean latency
max mean
1.254036
s
Min
Lowest per-partition mean latency
min mean
1.254036
s
Session initiated -- success rate
The number of initiated sessions that completed per second.
Mean
mean
8.15
/s
Max
Highest per-partition mean rate
max
8.15
/s
Min
Lowest per-partition mean rate
min
8.15
/s
Session initiated -- success ratio
Fraction of initiated sessions that completed successfully (0–1).
1
Session initiated -- failures
Initiated sessions that failed to complete.
Total
2
Agents affected
1 / 1
Mean per agent
Total count divided by number of partitions, rounded to nearest whole number
2
Max
Highest count in any single partition
2
Min
Lowest count in any single partition
2
Holochain Metrics
Cascade duration
Time taken to execute a cascade (get) query inside Holochain.
mean
0.000842
s
SD
0.000167
s
p50
0.000803
s
<
p95
0.001097
s
<
p99
0.001101
s
WASM usage: 'countersigning::accept_two_party'
Metered usage count of the WASM ribosome for this zome function.
total
13657782
over 280.130406958s
mean rate
77823.82
/s
std
16203.7
/s
p5
52199.86
/s
p95
91354.62
/s
peak
104400.68
/s
WASM usage: 'countersigning::call_remote_signal'
Metered usage count of the WASM ribosome for this zome function.
total
7389685
over 280.130406958s
mean rate
415998.49
/s
std
277686.35
/s
p5
20981.11
/s
p95
829682.13
/s
peak
908492.55
/s
WASM usage: 'countersigning::commit_two_party'
Metered usage count of the WASM ribosome for this zome function.
total
30459415
over 280.130406958s
mean rate
4.18392996e+06
/s
std
3.03979961e+06
/s
p5
21922.06
/s
p95
9.2763299e+06
/s
peak
9.41152527e+06
/s
WASM usage: 'countersigning::init'
Metered usage count of the WASM ribosome for this zome function.
total
0
over 298.262730351s
Not enough time data to show a trend.
WASM usage: 'countersigning::initiator_hello'
Metered usage count of the WASM ribosome for this zome function.
total
0
over 270.003196326s
Not enough time data to show a trend.
WASM usage: 'countersigning::list_participants'
Metered usage count of the WASM ribosome for this zome function.
total
10854327
over 270.003196326s
mean rate
40200.75
/s
std
21137.34
/s
p5
26694.07
/s
p95
44490.91
/s
peak
142221.7
/s
WASM usage: 'countersigning::participant_hello'
Metered usage count of the WASM ribosome for this zome function.
total
0
over 280.130406958s
Not enough time data to show a trend.
WASM usage: 'countersigning::start_two_party'
Metered usage count of the WASM ribosome for this zome function.
total
39918763
over 260.003069454s
mean rate
153531.93
/s
std
26412.34
/s
p5
113490.71
/s
p95
189150.73
/s
peak
189236.77
/s
WASM usage: 'countersigning_integrity::entry_defs'
Metered usage count of the WASM ribosome for this zome function.
total
1376431
over 298.262730351s
mean rate
1.13558298e+06
/s
std
655226.27
/s
p5
5139.49
/s
p95
2.18274546e+06
/s
peak
2.22643924e+06
/s
Zome call duration: 'countersigning::accept_two_party'
Duration of this zome call as measured by Holochain internally.
mean
0.330852
s
SD
0.128361
s
p50
0.20679
s
<
p95
0.474824
s
<
p99
0.476035
s
Zome call duration: 'countersigning::call_remote_signal'
Duration of this zome call as measured by Holochain internally.
mean
0.003635
s
SD
0.00077
s
p50
0.003627
s
<
p95
0.004866
s
<
p99
0.004877
s
Zome call duration: 'countersigning::commit_two_party'
Duration of this zome call as measured by Holochain internally.
mean
0.227292
s
SD
0.039174
s
p50
0.207497
s
<
p95
0.314213
s
<
p99
0.317633
s
Zome call duration: 'countersigning::initiator_hello'
Duration of this zome call as measured by Holochain internally.
mean
0.000944
s
SD
0
s
p50
0.000944
s
p95
0.000944
s
p99
0.000944
s
Zome call duration: 'countersigning::list_participants'
Duration of this zome call as measured by Holochain internally.
mean
0.006283
s
SD
3.2e-05
s
p50
0.006279
s
<
p95
0.006326
s
<
p99
0.006327
s
Zome call duration: 'countersigning::participant_hello'
Duration of this zome call as measured by Holochain internally.
mean
0.008893
s
SD
0.002699
s
p50
0.011591
s
p95
0.011591
s
p99
0.011591
s
Zome call duration: 'countersigning::start_two_party'
Duration of this zome call as measured by Holochain internally.
mean
0.334334
s
SD
0.0263
s
p50
0.337788
s
<
p95
0.341714
s
<
p99
0.342049
s
WASM call duration: 'countersigning::accept_two_party'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.329413
s
SD
0.12842
s
p50
0.205282
s
<
p95
0.47338
s
<
p99
0.474644
s
WASM call duration: 'countersigning::call_remote_signal'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.002245
s
SD
0.000491
s
p50
0.002116
s
<
p95
0.003036
s
<
p99
0.003098
s
WASM call duration: 'countersigning::commit_two_party'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.225947
s
SD
0.039062
s
p50
0.206466
s
<
p95
0.312592
s
<
p99
0.316021
s
WASM call duration: 'countersigning::init'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.007542
s
SD
0.003913
s
p50
0.007081
s
<
p95
0.012549
s
p99
0.012549
s
WASM call duration: 'countersigning::initiator_hello'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.00057
s
SD
0
s
p50
0.00057
s
p95
0.00057
s
p99
0.00057
s
WASM call duration: 'countersigning::list_participants'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.005255
s
SD
4.4e-05
s
p50
0.005241
s
<
p95
0.005318
s
<
p99
0.005324
s
WASM call duration: 'countersigning::participant_hello'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.007619
s
SD
0.002412
s
p50
0.010031
s
p95
0.010031
s
p99
0.010031
s
WASM call duration: 'countersigning::start_two_party'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.333359
s
SD
0.026305
s
p50
0.336823
s
<
p95
0.340735
s
<
p99
0.34109
s
WASM call duration: 'countersigning_integrity::entry_defs'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.001622
s
SD
0.000469
s
p50
0.001633
s
<
p95
0.002471
s
<
p99
0.002482
s
Host function call duration: '__hc__accept_countersigning_preflight_request_1'
Duration of this host function call invoked from within WASM.
mean
0.001827
s
SD
0.000789
s
p50
0.00207
s
<
p95
0.003111
s
<
p99
0.003314
s
Host function call duration: '__hc__agent_info_1'
Duration of this host function call invoked from within WASM.
mean
4e-05
s
SD
1.6e-05
s
p50
4.7e-05
s
<
p95
7.9e-05
s
p99
7.9e-05
s
Host function call duration: '__hc__call_1'
Duration of this host function call invoked from within WASM.
mean
0.323605
s
SD
0.107999
s
p50
0.33054
s
<
p95
0.459024
s
<
p99
0.467712
s
Host function call duration: '__hc__create_1'
Duration of this host function call invoked from within WASM.
mean
0.000594
s
SD
0.000188
s
p50
0.000555
s
<
p95
0.000735
s
p99
0.000735
s
Host function call duration: '__hc__create_link_1'
Duration of this host function call invoked from within WASM.
mean
0.000679
s
SD
0.000158
s
p50
0.000837
s
p95
0.000837
s
p99
0.000837
s
Host function call duration: '__hc__emit_signal_1'
Duration of this host function call invoked from within WASM.
mean
7.7e-05
s
SD
2.4e-05
s
p50
8.2e-05
s
<
p95
0.000113
s
<
p99
0.000126
s
Host function call duration: '__hc__get_agent_activity_1'
Duration of this host function call invoked from within WASM.
mean
0.219564
s
SD
0.039407
s
p50
0.200169
s
<
p95
0.306499
s
<
p99
0.309901
s
Host function call duration: '__hc__get_links_1'
Duration of this host function call invoked from within WASM.
mean
0.000915
s
SD
0.000165
s
p50
0.000876
s
<
p95
0.001166
s
<
p99
0.00117
s
Host function call duration: '__hc__sys_time_1'
Duration of this host function call invoked from within WASM.
mean
6e-06
s
SD
0
s
p50
6e-06
s
p95
6e-06
s
p99
6e-06
s
Host function call duration: '__hc__zome_info_1'
Duration of this host function call invoked from within WASM.
mean
0.002692
s
SD
0.000987
s
p50
0.002875
s
<
p95
0.004467
s
p99
0.004467
s
Emitted local signals
Count of local signals emitted via the emit_signal host function.
total
107
over 280.130325249s
mean rate
28.06
/s
std
19.71
/s
p5
0.12
/s
p95
60.89
/s
peak
60.89
/s
Post-commit duration
Time spent executing post-commit workflows.
mean
0.000339
s
SD
0.000161
s
p50
0.00045
s
<
p95
0.000536
s
<
p99
0.000836
s
Conductor uptime
Conductor uptime gauge. A drop in the trend indicates a restart during the run.
p5
16.87
s
<
mean
150.4
s
SD
85.63
s
<
p95
285.49
s
Integrated ops
Count of DHT ops integrated since the conductor started. Resets on restart.
total
1338
over 298.262709011s
mean rate
5.62
/s
std
3.9
/s
p5
0
/s
p95
9.77
/s
peak
23.56
/s
Integration delay
Delay between an op being stored and being integrated. High values indicate the pipeline is falling behind.
mean
0.179791
s
SD
0.233595
s
p50
0.157706
s
<
p95
0.533858
s
<
p99
1.430532
s
Validation attempts per op
Number of validation attempts required per op. Values consistently above 1 indicate retries.
mean
1.674725
attempts
SD
0.172483
attempts
p50
1.73639
attempts
<
p95
1.748299
attempts
<
p99
1.755556
attempts
App validation workflow duration
Time spent running the app validation workflow.
mean
0.007663
s
SD
0.077612
s
p50
0.004266
s
<
p95
0.036364
s
<
p99
0.720258
s
Countersigning workflow duration
Time spent running the countersigning workflow.
mean
0.014055
s
SD
0.024548
s
p50
0.013586
s
<
p95
0.061713
s
<
p99
0.164746
s
Integrate DHT ops workflow duration
Time spent running the integration workflow.
mean
0.001444
s
SD
0.001086
s
p50
0.00138
s
<
p95
0.002278
s
<
p99
0.008731
s
Publish DHT ops workflow duration
Time spent running the publish workflow.
mean
0.004102
s
SD
0.001224
s
p50
0.004309
s
<
p95
0.005833
s
<
p99
0.007468
s
System validation workflow duration
Time spent running the sys validation workflow.
mean
0.03326
s
SD
0.060205
s
p50
0.023866
s
<
p95
0.102937
s
<
p99
0.498193
s
Validation receipt workflow duration
Time spent running the validation receipt workflow.
mean
0.330951
s
SD
0.132245
s
p50
0.27395
s
<
p95
0.528969
s
<
p99
0.601589
s
Witnessing workflow duration
Time spent running the witnessing workflow.
mean
0.00348
s
SD
0.00205
s
p50
0.003969
s
<
p95
0.004901
s
<
p99
0.004965
s
Authored DB connection use time
Time spent holding authored database connections.
mean
0.000364
s
SD
6e-05
s
p50
0.00034
s
<
p95
0.000429
s
<
p99
0.000677
s
DHT DB connection use time
Time spent holding DHT database connections.
mean
0.000408
s
SD
0.000187
s
p50
0.00031
s
<
p95
0.000671
s
<
p99
0.0007
s
Conductor DB connection use time
Time spent holding conductor database connections.
mean
0.000218
s
SD
2.1e-05
s
p50
0.000223
s
<
p95
0.000254
s
<
p99
0.000271
s
Cache DB connection use time
Time spent holding cache database connections.
mean
0.000203
s
SD
6.1e-05
s
p50
0.000188
s
<
p95
0.000298
s
<
p99
0.000427
s
WASM DB connection use time
Time spent holding WASM database connections.
mean
0.000259
s
SD
0.000195
s
p50
0.000186
s
<
p95
0.000523
s
p99
0.000523
s
Peer meta store DB connection use time
Time spent holding peer meta store database connections.
mean
0.000169
s
SD
2.7e-05
s
p50
0.000169
s
<
p95
0.000207
s
<
p99
0.00021
s
Write transaction duration
Duration of exclusive write transactions across all databases.
mean
0.000919
s
SD
0.082316
s
p50
0.001178
s
<
p95
0.284572
s
p99
0.284572
s
Lair keystore: signing
Duration of Ed25519 signing requests to the Lair keystore.
mean
0.000457
s
SD
0.000236
s
p50
0.000456
s
<
p95
0.000597
s
<
p99
0.002112
s
P2P request roundtrip: 'get'
Time spent sending a get request and awaiting its response.
mean
1.489745
s
SD
2.449059
s
p50
0.918618
s
<
p95
5.658244
s
p99
5.658244
s
P2P request roundtrip: 'get_agent_activity'
Time spent sending a get_agent_activity request and awaiting its response.
mean
0.216642
s
SD
0.036441
s
p50
0.194772
s
<
p95
0.297907
s
<
p99
0.309365
s
P2P request roundtrip: 'send_validation_receipts'
Time spent sending a send_validation_receipts request and awaiting its response.
mean
0.300814
s
SD
0.250397
s
p50
0.368205
s
<
p95
1.223256
s
p99
1.223256
s
P2P request roundtrip: 'call_remote'
Time spent sending a call_remote request and awaiting its response.
mean
0.323112
s
SD
0.108059
s
p50
0.330231
s
<
p95
0.458481
s
<
p99
0.467184
s
Host Metrics
User CPU usage
CPU usage by user space
5.75
%
Network receive rate (primary)
Rate of bytes received on primary network interface
p5
7.95
KiB/s
<
mean
60.40
KiB/s
SD
215.08
KiB/s
p95
40.48
KiB/s
Network send rate (primary)
Rate of bytes sent on primary network interface
p5
5.39
KiB/s
<
mean
15.99
KiB/s
SD
9.02
KiB/s
<
p95
34.51
KiB/s
Total bytes received
Total bytes received on primary network interface
count
52.50
MiB
mean
179.20
KiB/s
Total bytes sent
Total bytes sent on primary network interface
count
13.89
MiB
mean
47.42
KiB/s
CPU spike anomaly
Whether a CPU spike anomaly was detected
Detected

Warning CPU p99 reached 91.1%

Memory leak anomaly
Whether a memory leak anomaly was detected
Detected

Warning Memory growing at 104.13 MiB/s

Disk full anomaly
Whether a disk full anomaly was detected
NotDetected
Swap thrashing anomaly
Whether a swap thrashing anomaly was detected
NotDetected
System overload anomaly
Whether a system overload anomaly was detected
NotDetected
CPU usage
Total CPU usage and kernel CPU usage
Total
p5
1.13
%
<
mean
8.04
%
SD
13.1
%
<
p95
13.57
%
System
2.28
%
CPU percentiles
CPU usage percentiles
p50
6.65
%
p95
13.57
%
p99
91.14
%
CPU usage above 80%
Number of hosts above 80% CPU and mean time spent above threshold for those hosts
count
1
hosts
mean time
0.03
s
Memory used percentage
Percentage of memory used
p5
8.47
%
<
mean
10.68
%
SD
2.39
%
<
p95
14.07
%
Memory available percentage
Percentage of available memory
p5
85.93
%
<
mean
89.32
%
SD
2.39
%
<
p95
91.53
%
Max host memory used
Maximum memory usage percentage across all hosts
max
14.02
%
Max host swap used percentage
Maximum swap space usage percentage across all hosts
max
0.00
%
Memory growth rate
Rate of memory growth over time
growth
104.13
MiB/s
Disk read throughput
Disk read throughput in MiB/s
0.00
MiB/s
Disk write throughput
Disk write throughput in MiB/s
1.28
MiB/s
Disk space utilization risk
Number of hosts nearing disk space capacity by mount point
Mount Point /
0/2
hosts
System load average
System load averages over 1, 5, and 15 minutes. This is an unnormalised value not divided by number of CPUs, so it is only meaningful if all machines have the same core count.
1 min
0.27
5 min
0.14
15 min
0.06
CPU overloaded hosts
Percentage of hosts that experienced CPU overload
0.00
%
CPU pressure
CPU pressure over 10, 60, and 300 second averages
10 second average
p5
0
%
<
mean
1.2998
%
SD
1.1737
%
<
p95
3.88
%
60 second average
1.1273
%
300 second average
0.5729
%
Memory pressure some
Memory pressure (some tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
mean
0
%
SD
0
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
Memory pressure full
Memory pressure (all tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
mean
0
%
SD
0
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
I/O pressure some
I/O pressure (some tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
<
mean
0.1124
%
SD
0.3222
%
<
p95
0.62
%
60 second average
0.1207
%
300 second average
0.0689
%
I/O pressure full
I/O pressure (all tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
<
mean
0.07
%
SD
0.2248
%
<
p95
0.41
%
60 second average
0.0760
%
300 second average
0.0413
%
Holochain process CPU usage
CPU usage by Holochain process
p5
0.89
%
<
mean
5.67
%
SD
10.76
%
<
p95
7.4
%
Holochain process memory (PSS)
Proportional Set Size memory of Holochain process
p5
214.65
KiB
<
mean
232.03
KiB
SD
26.71
KiB
<
p95
257.31
KiB
Holochain process threads
Number of threads in Holochain process
p5
11
threads
<
mean
14.51
threads
SD
3.36
threads
<
p95
20
threads
Holochain process file descriptors
Number of file descriptors used by Holochain process
p5
64
file descriptors
<
mean
76.3
file descriptors
SD
19.51
file descriptors
<
p95
121
file descriptors

Validation Receipts

↑ Back to index

Creates an entry, waits for the required validation receipts, then repeats. Records the amount of time it took to accumulate the required number of receipts for all DHT operations. This is measured to the nearest 20ms so that the agent is not kept too busy checking for receipts.

Each agent in this scenario waits for a certain number of peers to be available or for up to two minutes, whichever happens first, before starting its behaviour.

By default, this scenario will wait for a complete set of validation receipts before committing the next record. If the NO_VALIDATION_COMPLETE environment variable is set, it will instead publish new records on every round, building up an ever-growing list of action hashes to check on.
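The receipt-collection loop described above can be sketched roughly as follows. The helper names are hypothetical; in the real scenario the receipt count comes from a Holochain zome call, and 20 ms is the polling granularity mentioned above:

```python
import time

POLL_INTERVAL_S = 0.02  # 20 ms: the measurement granularity noted above

def wait_for_receipts(count_receipts, required, timeout_s=300.0):
    """Poll until count_receipts() reports at least `required`
    validation receipts; return the elapsed time, accurate only to
    the poll interval so the agent isn't kept busy checking."""
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        if count_receipts() >= required:
            return time.monotonic() - start
        time.sleep(POLL_INTERVAL_S)
    raise TimeoutError("required validation receipts never arrived")

# Example with a stand-in receipt source that is already complete.
elapsed = wait_for_receipts(lambda: 5, required=5)
```

The elapsed times collected this way are what feed the "Receipts complete timing" metric below.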

Started
Thu, 23 Apr 2026 12:07:36 UTC
Peer count
250
Peer count at end
54532
Behaviours
  • default (1 agent)
Holochain version
0.6.1-rc.4
Wind Tunnel version
0.6.0
Run ID
validation_receipts_24829354002_1
Receipts complete timing
The amount of time between publishing a record and receiving the required number of validation receipts.
Mean
mean
157.17435
s
SD
76.292179
s
Max
Highest per-partition mean latency
max mean
776.536968
s
Min
Lowest per-partition mean latency
min mean
15.023473
s
Receipts complete rate
The number of complete validation receipt sets collected per second.
Mean
mean
0.72
/s
Max
Highest per-partition mean rate
max
1
/s
Min
Lowest per-partition mean rate
min
0
/s
Error count
The total number of errors accumulated during the run.
2
Holochain Metrics
Cascade fetch errors
Count of network fetch errors during cascade calls.
total
0
over 2055.315762294s
Not enough time data to show a trend.
WASM usage: 'crud::create_sample_entry'
Metered usage count of the WASM ribosome for this zome function.
total
0
over 37340.740215519s
Not enough time data to show a trend.
WASM usage: 'crud::get_sample_entry_validation_receipts'
Metered usage count of the WASM ribosome for this zome function.
total
1092630
over 37340.740215519s
mean rate
9.068937881e+08
/s
std
2.0591021829e+10
/s
p5
0
/s
p95
1.0680787917e+09
/s
peak
1.60835319693e+12
/s
WASM usage: 'crud::init'
Metered usage count of the WASM ribosome for this zome function.
total
0
over 37340.740215519s
Not enough time data to show a trend.
WASM usage: 'crud_integrity::entry_defs'
Metered usage count of the WASM ribosome for this zome function.
total
0
over 37340.740215519s
Not enough time data to show a trend.
Zome call duration: 'crud::create_sample_entry'
Duration of this zome call as measured by Holochain internally.
mean
0.005943
s
SD
0.002847
s
p50
0.006585
s
<
p95
0.011217
s
<
p99
0.016104
s
Zome call duration: 'crud::get_sample_entry_validation_receipts'
Duration of this zome call as measured by Holochain internally.
mean
0.00629
s
SD
0.01087
s
p50
0.004862
s
<
p95
0.008907
s
<
p99
0.014088
s
WASM call duration: 'crud::create_sample_entry'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.004843
s
SD
0.002313
s
p50
0.005361
s
<
p95
0.009415
s
<
p99
0.01282
s
WASM call duration: 'crud::get_sample_entry_validation_receipts'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.004474
s
SD
0.007638
s
p50
0.003245
s
<
p95
0.007022
s
<
p99
0.012052
s
WASM call duration: 'crud::init'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.00274
s
SD
0.00151
s
p50
0.002561
s
<
p95
0.005267
s
<
p99
0.008401
s
WASM call duration: 'crud_integrity::entry_defs'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.001468
s
SD
0.000553
s
p50
0.001574
s
<
p95
0.002361
s
<
p99
0.003193
s
Host function call duration: '__hc__create_1'
Duration of this host function call invoked from within WASM.
mean
0.000561
s
SD
0.000638
s
p50
0.000534
s
<
p95
0.001806
s
<
p99
0.004295
s
Host function call duration: '__hc__get_validation_receipts_1'
Duration of this host function call invoked from within WASM.
mean
0.001978
s
SD
0.004324
s
p50
0.000982
s
<
p95
0.004309
s
<
p99
0.008661
s
Host function call duration: '__hc__zome_info_1'
Duration of this host function call invoked from within WASM.
mean
0.002495
s
SD
0.001188
s
p50
0.002627
s
<
p95
0.004847
s
<
p99
0.0064
s
Post-commit duration
Time spent executing post-commit workflows.
mean
0.00055
s
SD
0.000463
s
p50
0.000514
s
<
p95
0.001543
s
<
p99
0.002476
s
Conductor uptime
Conductor uptime gauge. A drop in the trend indicates a restart during the run.
p5
45.96
s
<
mean
446.93
s
SD
259.99
s
<
p95
856.09
s
Integration delay
Delay between an op being stored and being integrated. High values indicate the pipeline is falling behind.
mean
40.520444
s
SD
26.263828
s
p50
29.442623
s
<
p95
65.409019
s
<
p99
121.763046
s
Validation attempts per op
Number of validation attempts required per op. Values consistently above 1 indicate retries.
mean
2.046712
attempts
SD
0.228798
attempts
p50
2.036481
attempts
<
p95
2.088447
attempts
<
p99
2.196747
attempts
App validation workflow duration
Time spent running the app validation workflow.
mean
0.838379
s
SD
3.17272
s
p50
0.951615
s
<
p95
4.101512
s
<
p99
6.349412
s
Countersigning workflow duration
Time spent running the countersigning workflow.
mean
0.003932
s
SD
0.003338
s
p50
0.00336
s
<
p95
0.008235
s
<
p99
0.02073
s
Integrate DHT ops workflow duration
Time spent running the integration workflow.
mean
0.007289
s
SD
0.03024
s
p50
0.004988
s
<
p95
0.020045
s
<
p99
0.075652
s
Publish DHT ops workflow duration
Time spent running the publish workflow.
mean
0.006548
s
SD
0.010519
s
p50
0.006015
s
<
p95
0.012094
s
<
p99
0.018438
s
System validation workflow duration
Time spent running the sys validation workflow.
mean
17.013157
s
SD
16.747345
s
p50
18.836988
s
<
p95
54.192398
s
<
p99
67.78573
s
Validation receipt workflow duration
Time spent running the validation receipt workflow.
mean
11.95469
s
SD
19.98436
s
p50
5.87845
s
<
p95
43.230315
s
<
p99
108.920664
s
Authored DB connection use time
Time spent holding authored database connections.
mean
0.000259
s
SD
0.000276
s
p50
0.000306
s
<
p95
0.000465
s
<
p99
0.000787
s
DHT DB connection use time
Time spent holding DHT database connections.
mean
0.000773
s
SD
0.002537
s
p50
0.000289
s
<
p95
0.001495
s
<
p99
0.002321
s
Conductor DB connection use time
Time spent holding conductor database connections.
mean
0.000174
s
SD
0.000116
s
p50
0.000169
s
<
p95
0.000268
s
<
p99
0.0004
s
Cache DB connection use time
Time spent holding cache database connections.
mean
0.000157
s
SD
0.000174
s
p50
0.000145
s
<
p95
0.000266
s
<
p99
0.000498
s
WASM DB connection use time
Time spent holding WASM database connections.
mean
0.000256
s
SD
0.000244
s
p50
0.000227
s
<
p95
0.000513
s
<
p99
0.001756
s
Peer meta store DB connection use time
Time spent holding peer meta store database connections.
mean
0.000135
s
SD
0.000118
s
p50
0.000132
s
<
p95
0.000215
s
<
p99
0.000278
s
Write transaction duration
Duration of exclusive write transactions across all databases.
mean
0.001766
s
SD
0.104766
s
p50
0.001438
s
<
p95
0.272483
s
<
p99
0.491787
s
Lair keystore: signing
Duration of Ed25519 signing requests to the Lair keystore.
mean
0.000394
s
SD
0.002654
s
p50
0.000443
s
<
p95
0.000741
s
<
p99
0.001717
s
P2P request roundtrip: 'get'
Time spent sending a get request and awaiting its response.
mean
1.500023
s
SD
29.616863
s
p50
0.905725
s
<
p95
60.00278
s
<
p99
60.035955
s
P2P request roundtrip: 'send_validation_receipts'
Time spent sending a send_validation_receipts request and awaiting its response.
mean
0.607052
s
SD
29.083131
s
p50
0.509212
s
<
p95
60.001629
s
<
p99
60.002109
s
Host Metrics
User CPU usage
CPU usage by user space
6.77
%
Network receive rate (primary)
Rate of bytes received on primary network interface
p5
5.54
KiB/s
<
mean
559.20
KiB/s
SD
580.60
KiB/s
<
p95
1.57
MiB/s
Network send rate (primary)
Rate of bytes sent on primary network interface
p5
2.84
KiB/s
<
mean
68.27
KiB/s
SD
64.21
KiB/s
<
p95
188.43
KiB/s
Total bytes received
Total bytes received on primary network interface
count
122.70
GiB
mean
7.84
MiB/s
Total bytes sent
Total bytes sent on primary network interface
count
14.84
GiB
mean
971.51
KiB/s
CPU spike anomaly
Whether a CPU spike anomaly was detected
NotDetected
Memory leak anomaly
Whether a memory leak anomaly was detected
Detected

Warning Memory growing at 763.06 MiB/s

Disk full anomaly
Whether a disk full anomaly was detected
NotDetected
Swap thrashing anomaly
Whether a swap thrashing anomaly was detected
Detected

Critical Heavy swap usage (50.5% swap used)

System overload anomaly
Whether a system overload anomaly was detected
Detected

Warning 1% of hosts overloaded (load5/ncpus > 1.0)

CPU usage
Total CPU usage and kernel CPU usage
Total
p5
1.96
%
<
mean
10.35
%
SD
12.57
%
<
p95
30.97
%
System
3.58
%
CPU percentiles
CPU usage percentiles
p50
7.14
%
p95
30.97
%
p99
79.74
%
CPU usage above 80%
Number of hosts above 80% CPU and mean time spent above threshold for those hosts
count
144
hosts
mean time
0.02
s
Memory used percentage
Percentage of memory used
p5
6.4
%
<
mean
10.59
%
SD
6.87
%
<
p95
14.65
%
Memory available percentage
Percentage of available memory
p5
85.35
%
<
mean
89.41
%
SD
6.87
%
<
p95
93.6
%
Max host memory used
Maximum memory usage percentage across all hosts
max
74.63
%
Max host swap used percentage
Maximum swap space usage percentage across all hosts
max
50.46
%
Memory growth rate
Rate of memory growth over time
growth
763.06
MiB/s
Disk read throughput
Disk read throughput in MiB/s
0.04
MiB/s
Disk write throughput
Disk write throughput in MiB/s
60.59
MiB/s
Disk space utilization risk
Number of hosts nearing disk space capacity by mount point
Mount Point /
0/200
hosts
Mount Point /efi-boot
0/7
hosts
Mount Point /etc/hostname
0/14
hosts
Mount Point /etc/hosts
0/14
hosts
Mount Point /etc/resolv.conf
0/14
hosts
Mount Point /nix/store
0/13
hosts
System load average
System load averages over 1, 5, and 15 minutes. This is an unnormalised value not divided by number of CPUs, so it is only meaningful if all machines have the same core count.
1 min
0.34
5 min
0.29
15 min
0.42
CPU overloaded hosts
Percentage of hosts that experienced CPU overload
1.32
%
CPU pressure
CPU pressure over 10, 60, and 300 second averages
10 second average
p5
0
%
<
mean
2.046
%
SD
5.3136
%
<
p95
6.58
%
60 second average
2.0086
%
300 second average
1.6811
%
Memory pressure some
Memory pressure (some tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
mean
0
%
SD
0
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
Memory pressure full
Memory pressure (all tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
mean
0
%
SD
0
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
I/O pressure some
I/O pressure (some tasks stalled) over 10, 60 and 300 second averages
10 second average
p5
0
%
<
mean
0.8754
%
SD
3.9188
%
<
p95
3.83
%
60 second average
0.8511
%
300 second average
0.6928
%
I/O pressure full
I/O pressure (all tasks stalled) over 10, 60 and 300 second averages
10 second average
p5
0
%
<
mean
0.703
%
SD
3.3815
%
<
p95
3.09
%
60 second average
0.6798
%
300 second average
0.5558
%
Holochain process CPU usage
CPU usage by Holochain process
p5
0.5
%
<
mean
7.94
%
SD
11.23
%
<
p95
22.59
%
Holochain process memory (PSS)
Proportional Set Size memory of Holochain process
p5
229.46
KiB
<
mean
252.96
KiB
SD
25.34
KiB
<
p95
282.08
KiB
Holochain process threads
Number of threads in Holochain process
p5
13
threads
<
mean
29.14
threads
SD
13.58
threads
<
p95
58
threads
Holochain process file descriptors
Number of file descriptors used by Holochain process
p5
55
file descriptors
<
mean
75.69
file descriptors
SD
16.6
file descriptors
<
p95
95
file descriptors

Write/get_agent_activity with volatile conductors

↑ Back to index

A scenario where write peers write entries, while get_agent_activity_volatile peers each query a single write agent's activity with get_agent_activity, but shut down and restart their conductors at semi-random intervals.

Before a target write peer and the requesting get_agent_activity_volatile peer are in sync, this measures get_agent_activity call performance over the network. Once a write peer is in sync with a get_agent_activity_volatile peer, the write peer will have published its actions and entries, so get_agent_activity calls will likely find most of the data they need locally. At that point the scenario measures database query performance and the code paths through host functions.
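The volatile behaviour described above can be sketched as a simple run/stop cycle: query the target agent's activity while the conductor is up, record per-call latency, then stop and pause before the next restart. This is a minimal illustrative sketch; start_conductor, stop_conductor, and call_get_agent_activity are hypothetical stand-ins, not the real Wind Tunnel or Holochain API, and the durations are shrunk for demonstration.

```python
import time

# Hypothetical stand-ins for conductor control and the zome call.
def start_conductor(): return {"running": True}
def stop_conductor(c): c["running"] = False
def call_get_agent_activity(c, agent): return []  # would hit the DHT

def volatile_cycle(target_agent, run_s=0.05, stop_s=0.02):
    """One run/stop cycle of a volatile peer: query while up, then pause."""
    conductor = start_conductor()
    deadline = time.monotonic() + run_s
    timings = []
    while time.monotonic() < deadline:
        t0 = time.monotonic()
        call_get_agent_activity(conductor, target_agent)
        timings.append(time.monotonic() - t0)  # per-call latency
    stop_conductor(conductor)
    time.sleep(stop_s)  # stopped duration before the next restart
    return timings

lags = volatile_cycle("uhCAk...write_agent")  # hypothetical agent key
```

The "Volatile conductor running duration" and "stopped duration" metrics below correspond to the run_s and stop_s phases of each cycle, drawn semi-randomly in the real scenario.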

Started
Thu, 23 Apr 2026 13:49:00 UTC
Peer count
250
Peer count at end
55030
Behaviours
  • get_agent_activity_volatile (1 agent)
  • write (1 agent)
Holochain version
0.6.1-rc.4
Wind Tunnel version
0.6.0
Run ID
write_get_agent_activity_volatile_24829354002_1
Highest observed action_seq
The rate at which zero-arc readers observe new chain heads on the writers' chains via get_agent_activity. This reflects the DHT's ability to propagate agent activity ops and make them available to querying peers.
Mean
mean
63.13
/s
Max
Highest per-partition mean rate
max
517
/s
Min
Lowest per-partition mean rate
min
0
/s
get_agent_activity_full zome call timing
The time taken to call the zome function that retrieves information on a write peer's source chain.
Mean
mean
0.289021
s
SD
0.627568
s
Max
Highest per-partition mean latency
max mean
1.086991
s
Min
Lowest per-partition mean latency
min mean
0.00725
s
Running volatile conductors count
The number of conductors being run by get_agent_activity_volatile peers.
p5
42
conductors
<
mean
90.33
conductors
SD
16.53
conductors
<
p95
101
conductors
Volatile conductor total running duration
The total running duration of a get_agent_activity_volatile conductor.
Mean
mean
711.230402
s
SD
771.181658
s
Max
Highest per-partition mean latency
max mean
1076.181818
s
Min
Lowest per-partition mean latency
min mean
24
s
Volatile conductor running duration
The duration that a get_agent_activity_volatile conductor was running before being stopped.
Mean
mean
119.350542
s
SD
80.232135
s
Max
Highest per-partition mean latency
max mean
163.120006
s
Min
Lowest per-partition mean latency
min mean
24.005821
s
Volatile conductor stopped duration
The duration that a get_agent_activity_volatile conductor was stopped before being started again.
Mean
mean
104.037331
s
SD
93.677474
s
Max
Highest per-partition mean latency
max mean
158.204407
s
Min
Lowest per-partition mean latency
min mean
8.990616
s
Reached target arc
Whether a get_agent_activity_volatile peer had reached its target arc in the moment before its conductor was shut down.
  • get_agent_activity_volatile_agent: uhCAk-0TQ2eLgIh1HKeC3-joorOGfj_focakZgdbjxWSbSgfQoCgQ
p5
1
mean
1
SD
0
p95
1
Not enough time data to show a trend.
  • get_agent_activity_volatile_agent: uhCAk024mNV9pfyYk892f7VXNboNeD3WwyiYdXPGDKlbOhO743Gom
p5
0
<
mean
0.71
SD
0.45
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk18KlvZtYBXcm02bV5aO29_G5vJMk4cnChBjhxwAcJLvdECRf
p5
0
<
mean
0.3
SD
0.46
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk1l_h2v4r_7icmDJpBQbJOmjaHafMBrxq3d0jJEea-3uJvelK
p5
0
<
mean
0.5
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk23Hdr-j0S9KQdqrW69jQK2CHnYAYSAxr5Mj2pxxRFwSnlPcX
p5
0
<
mean
0.58
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk3UsTIYW9HJZBSxVGtR5jljJzakSg3F3EQ-33JHmG6kG-kXFU
p5
0
<
mean
0.36
SD
0.48
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk7Q_elnrXuFQWpVkEx2vAIvMQEojkJB1PxONnkQzKAT5xTM_B
p5
1
mean
1
SD
0
p95
1
  • get_agent_activity_volatile_agent: uhCAk7SvksOYMNgn-DbdKioBrtl10M050jQw68pUZcANLAvpDUFPV
p5
0
<
mean
0.6
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk8mhyVO-hv3Q2TsF1KnzXgzAuI_oGas455B5hW_m_x-Np14DR
p5
0
<
mean
0.42
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk9gLO5e1e3FBQGxQrHFs5T280NzpHD6ba6Qe0XKYeOue8lox1
p5
0
<
mean
0.42
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkAeD7qPj-Nyt_ouSSELyHnM2y3Hhj2K2v70eWsakrnriaO7mj
p5
0
<
mean
0.42
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkApHDh5RLWQuE_D0BQlPPG0s07fZJA5EYJRBhQshK8nWR5cHy
p5
0
mean
0
SD
0
p95
0
  • get_agent_activity_volatile_agent: uhCAkAsYvGrjKxUALLzvh1OJZnuLbr4ePfX38-P7oLzYKBMC8Wbx6
p5
0
<
mean
0.5
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkCzQgIJK0IuEZBm8NkdDAbgc_0RQafoXciQ47HXsV_56rD0Lq
p5
0
<
mean
0.64
SD
0.48
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkD-oiWGT3dJB2pnYGqSIIP-PvEIgLLW_C0FstPV-Edc7NMfgU
p5
0
<
mean
0.55
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkDx6P8_xGfSVozjIX8N5wrrn4xlxTsceegd56K_kRcs6l5Lye
p5
0
<
mean
0.5
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkEJ-Ih9VHnnYRk5dn_MumkqyMSL7i0YAvnkc72mskbwpx-Z4n
p5
0
<
mean
0.25
SD
0.43
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkF2vZTDw2ZnngiJck0kYShGdf7oFENxuOtOmy6NttR5GhOESn
p5
0
<
mean
0.44
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkFAAAtKAyrwJYrjYcJpwlzGTkHwobYFPAz0de1xSOUlF7wCim
p5
0
<
mean
0.45
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkHIZnl8yllcj3PbpyO7bmGfSrQdNu4V1Z4J_5Pm03gt1UwYpd
p5
0
<
mean
0.4
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkIVO2l1yqCSboSla-yUzRlJpanWG0MdjluV3gW5Aw31hCBeGt
p5
0
<
mean
0.42
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkJ56xr_o_QeXe-xFbN0oeIQ_4ahhhoi6YHe5WljrwQn-jq1Sh
p5
1
mean
1
SD
0
p95
1
Not enough time data to show a trend.
  • get_agent_activity_volatile_agent: uhCAkJIc9Q4MVNVzWOv95EwLEeFd0b3_Nn2Y4t9OQebHZz8s4L08C
p5
0
<
mean
0.56
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkJJYq3K3cdKFwBaSazdUjOFVtWsqzLnhdVOVIWMNIEKDlIV8C
p5
0
<
mean
0.45
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkKSZx4A4U9xwifo5zebIiCVrn3J_wjGu9BsJfshMbFpfNAqIn
p5
0
<
mean
0.27
SD
0.44
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkKt65asxfovEwnQ8jWIbDVJo14O7oucAK3CGZ3bryEQnIieWh
p5
0
<
mean
0.5
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkNgnytCFR0RXpURCp1XB5OanHEjk0Ugh3r905VOKsbjK5BbPA
p5
0
<
mean
0.27
SD
0.45
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkO7mzrJFSTKZnUyXLas8hma6JTpltFHyt-iad3dcf7LtoN02p
p5
1
mean
1
SD
0
p95
1
  • get_agent_activity_volatile_agent: uhCAkQKd_N5Qy56dfVM2PN6LVaGm6FBdprDkjxT7DSYCY5kk93Rj2
p5
0
<
mean
0.38
SD
0.48
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkRsv95emBlJDQNm2YjJLnB5G4USk3bmN6t_K7D3SOb9PnnYf7
p5
0
<
mean
0.27
SD
0.45
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkS81YPXhfR_1xVLVscJHCc3B4W6xRmHn6ZqptvnM0YNjWr-Af
p5
0
<
mean
0.29
SD
0.45
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkSS2JG3DVOrFBdNMIznSqEPIbQYZXWk0ETUuopKizs6f9-t64
p5
1
mean
1
SD
0
p95
1
  • get_agent_activity_volatile_agent: uhCAkSxc7NcBWtKh8caZ_uzb8DS0oD1WrW3qkU0RfP4I7vmaHfLXn
p5
0
<
mean
0.42
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkUIghI8yjvf9CapIyf8Xu4_BUKTu-oWQxnEvt4CeubPcx9zlF
p5
0
<
mean
0.57
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkV4t-Oy4i0XD3vhYV52FRQW-dK7DAEQXExnPR1qFKym_yEU7q
p5
0
<
mean
0.36
SD
0.48
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkVkiN-CzLZYsi-6z4E_B10s5RXrNaRFziGO-jFNz2yniX5G35
p5
0
<
mean
0.2
SD
0.4
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkVl00BprYZHCZD26cOojf1UwnE7PiolR9Re-G8GzlBujv1Mrj
p5
0
<
mean
0.23
SD
0.42
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkWma-I2ERzu02iMtXuscvml-a2FUeUQKnumfg0V__z-FNv36M
p5
0
<
mean
0.44
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkX9Wbomh2RALuhOF3ytr0ebBGfIorn-aYrwS4gBsSQXRwnivH
p5
0
<
mean
0.29
SD
0.45
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkY5LgPbQgA2T57kLGel0W8NE9ju5rXUXCUjH0MAUxro4qijwC
p5
0
<
mean
0.38
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkYup299LEGozyCCtNuCpnHmAbWYB9B5lLqJWHGNgGUQ1ZQ9xN
p5
0
<
mean
0.4
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkZsxFxIu1jtnwThunYfTPKdf0QRhL3RemtCZPvh1Z8gKVCfCp
p5
0
<
mean
0.33
SD
0.47
<
p95
1
  • get_agent_activity_volatile_agent: uhCAk_MAYgdVMZNNTeoOJXU8jsWLAZ_QnhM1fjfld52fuziDMddLH
p5
0
<
mean
0.64
SD
0.48
<
p95
1
  • get_agent_activity_volatile_agent: uhCAka43hk0YZ2M63SbdLqhypmyMp5kFClrnD0Oi13HFM2KL2R82s
p5
0
<
mean
0.5
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkb9BAAUXPF5bvYpLz_yH2lnqSdJpjoGiSRIgHMgwgpa_uwwuW
p5
0
<
mean
0.38
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkby5-OAqLo0ZTHNhWp7HRGlTlQ6VWF5rFgySbOWaE8aALyS6b
p5
0
<
mean
0.5
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkcK5ocrGEiwjYipJBwjL0HJ5VSvgSWlr7pLleqOxTvyfLH4Sf
p5
0
<
mean
0.2
SD
0.4
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkcRcNIYMNtlBo_ZXU9FQnQknfCad40MCsuF5coUxFby4ilXrw
p5
0
<
mean
0.5
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkdSFVVS0dGLizz4B9yEPPU-CzWmIzHqlsfaIliNbUDjukTOgb
p5
0
<
mean
0.45
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkdt7u7k-tSsNT2NYMeIMeSU89IkEzc9czP5prPMfXj_gDjacP
p5
0
<
mean
0.54
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkev8Jjt0XNhDVqI0ZJ1ZzPpPPtXD-Fdvf5xTonrl35tc7vbYu
p5
0
<
mean
0.5
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkevQlEXXhnsr5dBLljDuO5lDRPHyvjOe0Zs7sNLmOV7UOcSOb
p5
0
<
mean
0.64
SD
0.48
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkf6Ob9hOIoaJrTDAj9HERz9BE8a8wt5A8bYTAUPNVBeY4uMAO
p5
0
<
mean
0.36
SD
0.48
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkf9m-Lp-58iw9NuELFtWi-wAtBYu_i5cJrwOt9pj9dQSXKJLE
p5
0
<
mean
0.5
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkg5wLVKsq-2lTI2lHqCSlsxO14fo8Obt4QFVSeSg4T-VdqaTs
p5
0
<
mean
0.5
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkgNxsDtzioQcSw2FXrhSxryUxFTNEkkwQSqpNCM_c4bEchVCS
p5
0
<
mean
0.36
SD
0.48
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkh0I6u_LWwG2Knv4urFzcM3NgooT2KcB2Y3NadnFtR-ZzyKJU
p5
0
<
mean
0.55
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkhX9JEOHdxKlyNQcU4-4neC5geEioYFWjSL2Xu_AWYVZMo-kG
p5
0
<
mean
0.43
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkiIohrovFi_YHyBMnLCW4luGmhl1pAPuxFN2F8to_fobyAA9i
p5
0
<
mean
0.58
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkky4WGV3VptqrKwK2ct21v0Euqga2GySqojMXBBZPFkBmnJlC
p5
0
<
mean
0.67
SD
0.47
<
p95
1
  • get_agent_activity_volatile_agent: uhCAknSdi0CuDOQbpBYcyxQCarE5-Dk7bmNb1U0_yIZsT2hq7WhN-
p5
0
<
mean
0.58
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkoHs9xnWC1RA65ec5HuQeGAzUu-Dgd_CeCDCA4QAwlUVWG30E
p5
0
<
mean
0.64
SD
0.48
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkqymy8hqzkqnLH5k9B2Kciu6q77fGXP_U6xf-jvHT23xv0v5-
p5
0
<
mean
0.45
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkrh8hNRK1zvVqdxH12dTJj_H4tJw9ZWcrTTXzenObugn-8PWX
p5
0
<
mean
0.5
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAks4nfSHhwlEYGbFapWRpxrsc2LmAeR8JRqdUfjwOUVLkTgecS
p5
0
<
mean
0.5
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAks9TrgFvcQfSioxxKp9jUxqDuI3G8kPb32pwgZ-itUz-JgLFr
p5
0
<
mean
0.23
SD
0.42
<
p95
1
  • get_agent_activity_volatile_agent: uhCAksgGIHlOcGcwkqN7kd2r8HgKXFpdPgemcuTgAHbHXYM1ffJXZ
p5
0
<
mean
0.43
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkt2b4ylDQ1u62GU9SDlbP5HDujtOEUrRcG3N89HYvVlJG2kML
p5
0
<
mean
0.53
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAktAYDc_f_mUa84VNckPqRrxmMqDmgaLVdqlZaz_AgV1aBw4U_
p5
0
<
mean
0.33
SD
0.47
<
p95
1
  • get_agent_activity_volatile_agent: uhCAktrP5hlO-wDyNvsZQzhP7B1tO10CW_OXbSYiyYcTjRqsbyPF1
p5
0
<
mean
0.4
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkuXnDn7devow1QyU4dL-B1WUcJfY3yWUZ3sr-bFImODH5eGzE
p5
0
<
mean
0.42
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAku_oGfhpckMYN_vScScxj_KSYFhN3Hrz3pj3nwvCggFUDRDq4
p5
0
<
mean
0.2
SD
0.4
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkujvN1iNFYYnNlTQP_FgBD6eKlnqYgdbvEEwJdEZgDEWhECwl
p5
0
<
mean
0.58
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkvMDYaQ3zEAIFbxeR75FOpKe5KMQdaiC8kasj5RloDlt1Ic4O
p5
0
<
mean
0.44
SD
0.5
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkvVomzvw4bIMzFny_A0JO4OfROTteuMkI0RsqtWU67xNiUo_D
p5
0
<
mean
0.42
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkyLElX_fQnExzB_VyIbS5HowNVPjfn-RESvIQO9RFAvWKPz4X
p5
0
<
mean
0.42
SD
0.49
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkymSMoTD9pc7nn7vMqEvELFe5h53930uCgQmNd44DMUNnc1eO
p5
0
<
mean
0.36
SD
0.48
<
p95
1
  • get_agent_activity_volatile_agent: uhCAkzA9f-aFpStwVylkN654ECyh526w2JxYeXBc4d4Q-17HelopP
p5
1
mean
1
SD
0
p95
1
Not enough time data to show a trend.
Error count
The number of errors accumulated.
828
Holochain Metrics
Cascade duration
Time taken to execute a cascade (get) query inside Holochain.
mean
0.000899
s
SD
0.003533
s
p50
0.001397
s
<
p95
0.00873
s
<
p99
0.017242
s
Zome call duration: 'agent_activity::announce_write_behaviour'
Duration of this zome call as measured by Holochain internally.
mean
0.007909
s
SD
0.004204
s
p50
0.007312
s
<
p95
0.017056
s
<
p99
0.025031
s
Zome call duration: 'agent_activity::create_sample_entry'
Duration of this zome call as measured by Holochain internally.
mean
0.007027
s
SD
0.001824
s
p50
0.007739
s
<
p95
0.009517
s
<
p99
0.013298
s
Zome call duration: 'agent_activity::get_agent_activity_full'
Duration of this zome call as measured by Holochain internally.
mean
0.149238
s
SD
2.813438
s
p50
0.383607
s
<
p95
2.497794
s
<
p99
5.569625
s
Zome call duration: 'agent_activity::get_random_agent_with_write_behaviour'
Duration of this zome call as measured by Holochain internally.
mean
0.00824
s
SD
2.38251
s
p50
0.470932
s
<
p95
3.319036
s
<
p99
5.308022
s
WASM call duration: 'agent_activity::announce_write_behaviour'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.006564
s
SD
0.003754
s
p50
0.006156
s
<
p95
0.012587
s
<
p99
0.022356
s
WASM call duration: 'agent_activity::create_sample_entry'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.005503
s
SD
0.001466
s
p50
0.006062
s
<
p95
0.007702
s
<
p99
0.010523
s
WASM call duration: 'agent_activity::get_agent_activity_full'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
1.139027
s
SD
67.882292
s
p50
0.791319
s
<
p95
149.91187
s
<
p99
359.906338
s
WASM call duration: 'agent_activity::get_random_agent_with_write_behaviour'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.021979
s
SD
19.334676
s
p50
0.552181
s
<
p95
59.863941
s
<
p99
60.00796
s
WASM call duration: 'agent_activity_integrity::entry_defs'
Duration of inner WASM calls for this function, excluding Holochain overhead.
mean
0.001679
s
SD
0.000508
s
p50
0.001831
s
<
p95
0.002502
s
<
p99
0.003096
s
Host function call duration: '__hc__agent_info_1'
Duration of this host function call invoked from within WASM.
mean
5.8e-05
s
SD
3.3e-05
s
p50
5.1e-05
s
<
p95
0.00013
s
<
p99
0.000175
s
Host function call duration: '__hc__create_1'
Duration of this host function call invoked from within WASM.
mean
0.00062
s
SD
0.000178
s
p50
0.000652
s
<
p95
0.00085
s
<
p99
0.00141
s
Host function call duration: '__hc__create_link_1'
Duration of this host function call invoked from within WASM.
mean
0.000672
s
SD
0.000369
s
p50
0.000587
s
<
p95
0.001497
s
<
p99
0.001808
s
Host function call duration: '__hc__get_agent_activity_1'
Duration of this host function call invoked from within WASM.
mean
1.13699
s
SD
67.882253
s
p50
0.790397
s
<
p95
149.909895
s
<
p99
359.90406
s
Host function call duration: '__hc__get_links_1'
Duration of this host function call invoked from within WASM.
mean
0.018212
s
SD
19.334566
s
p50
0.549846
s
<
p95
59.857168
s
<
p99
60.005297
s
Host function call duration: '__hc__random_bytes_1'
Duration of this host function call invoked from within WASM.
mean
1.5e-05
s
SD
1.3e-05
s
p50
2.2e-05
s
<
p95
5.3e-05
s
<
p99
7.3e-05
s
Host function call duration: '__hc__zome_info_1'
Duration of this host function call invoked from within WASM.
mean
0.002888
s
SD
0.000895
s
p50
0.002869
s
<
p95
0.004023
s
<
p99
0.005697
s
Post-commit duration
Time spent executing post-commit workflows.
mean
0.000519
s
SD
0.000195
s
p50
0.00057
s
<
p95
0.000793
s
<
p99
0.001017
s
Conductor uptime
Conductor uptime gauge. A drop in the trend indicates a restart during the run.
p5
8.05
s
<
mean
349.48
s
SD
471.86
s
<
p95
1516.19
s
Integrated ops
Count of DHT ops integrated since the conductor started. Resets on restart.
total
25193
over 38241.785513491s
mean rate
2.82280896e+06
/s
std
7.464428606e+07
/s
p5
0
/s
p95
3.6386374e+06
/s
peak
8.08242280285e+09
/s
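As the description notes, the integrated-ops counter resets on restart, so naively differentiating it produces artifacts at restart points; the extreme peak rate above may be such an artifact. The usual fix is to treat any decrease as a restart and count only the post-reset value. A minimal sketch, assuming evenly spaced samples; the sample values are hypothetical.

```python
def counter_rate(samples, interval_s):
    """Per-interval rates from a counter that resets to zero on restart."""
    rates = []
    for prev, cur in zip(samples, samples[1:]):
        # A decrease means the counter restarted; the new value alone
        # is the number of ops since the reset.
        delta = cur - prev if cur >= prev else cur
        rates.append(delta / interval_s)
    return rates

# Counter climbs, the conductor restarts (back to 0), then climbs again.
samples = [0, 500, 1200, 1900, 150, 800]
print(counter_rate(samples, interval_s=10.0))  # [50.0, 70.0, 70.0, 15.0, 65.0]
```

Without the reset handling, the fourth interval would register a large negative delta (or, with absolute values, a spurious spike) instead of the true 15 ops/s.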
Integration delay
Delay between an op being stored and being integrated. High values indicate the pipeline is falling behind.
mean
91.643635
s
SD
171.292517
s
p50
26.914381
s
<
p95
477.987312
s
<
p99
799.781918
s
Validation attempts per op
Number of validation attempts required per op. Values consistently above 1 indicate retries.
mean
1.430423
attempts
SD
1.77537
attempts
p50
1.461686
attempts
<
p95
5.5
attempts
<
p99
9.6
attempts
App validation workflow duration
Time spent running the app validation workflow.
mean
7.962225
s
SD
9.667577
s
p50
0.065222
s
<
p95
22.330226
s
<
p99
50.383945
s
Countersigning workflow duration
Time spent running the countersigning workflow.
mean
0.005918
s
SD
0.005094
s
p50
0.00464
s
<
p95
0.013334
s
<
p99
0.026217
s
Integrate DHT ops workflow duration
Time spent running the integration workflow.
mean
0.016607
s
SD
0.020671
s
p50
0.005685
s
<
p95
0.034431
s
<
p99
0.069987
s
Publish DHT ops workflow duration
Time spent running the publish workflow.
mean
0.63319
s
SD
1.190017
s
p50
0.012781
s
<
p95
2.683803
s
<
p99
6.116747
s
System validation workflow duration
Time spent running the sys validation workflow.
mean
44.904215
s
SD
49.190121
s
p50
35.052699
s
<
p95
149.450253
s
<
p99
213.152289
s
Validation receipt workflow duration
Time spent running the validation receipt workflow.
mean
0.755934
s
SD
30.9312
s
p50
0.758365
s
<
p95
61.956536
s
<
p99
162.77925
s
Authored DB connection use time
Time spent holding authored database connections.
mean
0.00338
s
SD
0.001699
s
p50
0.000746
s
<
p95
0.004799
s
<
p99
0.006852
s
DHT DB connection use time
Time spent holding DHT database connections.
mean
0.003172
s
SD
0.001617
s
p50
0.000857
s
<
p95
0.004965
s
<
p99
0.007994
s
Conductor DB connection use time
Time spent holding conductor database connections.
mean
0.000225
s
SD
0.000109
s
p50
0.000153
s
<
p95
0.000299
s
<
p99
0.000603
s
Cache DB connection use time
Time spent holding cache database connections.
mean
0.000165
s
SD
0.000248
s
p50
0.000177
s
<
p95
0.000497
s
<
p99
0.001058
s
WASM DB connection use time
Time spent holding WASM database connections.
mean
0.022855
s
SD
0.019726
s
p50
0.02413
s
<
p95
0.051981
s
<
p99
0.072426
s
Peer meta store DB connection use time
Time spent holding peer meta store database connections.
mean
0.000129
s
SD
0.000285
s
p50
0.000283
s
<
p95
0.000781
s
<
p99
0.00116
s
Write transaction duration
Duration of exclusive write transactions across all databases.
mean
0.004152
s
SD
0.071248
s
p50
0.001972
s
<
p95
0.171665
s
<
p99
0.39239
s
Lair keystore: signing
Duration of Ed25519 signing requests to the Lair keystore.
mean
0.000382
s
SD
0.000494
s
p50
0.000459
s
<
p95
0.001256
s
<
p99
0.002556
s
P2P request roundtrip: 'get'
Time spent sending a get request and awaiting its response.
mean
2.370779
s
SD
28.256399
s
p50
0.982875
s
<
p95
60.002526
s
<
p99
60.006007
s
P2P request roundtrip: 'get_links'
Time spent sending a get_links request and awaiting its response.
mean
1.621343
s
SD
5.776018
s
p50
0.953943
s
<
p95
3.275394
s
<
p99
13.240577
s
P2P request roundtrip: 'get_agent_activity'
Time spent sending a get_agent_activity request and awaiting its response.
mean
0.544742
s
SD
23.521862
s
p50
0.435653
s
<
p95
60.001367
s
<
p99
119.562822
s
P2P request roundtrip: 'send_validation_receipts'
Time spent sending a send_validation_receipts request and awaiting its response.
mean
1.011479
s
SD
28.131557
s
p50
0.602548
s
<
p95
60.001707
s
<
p99
60.003331
s
Host Metrics
User CPU usage
CPU usage by user space
12.09
%
Network receive rate (primary)
Rate of bytes received on primary network interface
p5
4.75
KiB/s
<
mean
226.08
KiB/s
SD
314.46
KiB/s
<
p95
910.80
KiB/s
Network send rate (primary)
Rate of bytes sent on primary network interface
p5
1.27
KiB/s
<
mean
23.79
KiB/s
SD
61.19
KiB/s
<
p95
63.78
KiB/s
Total bytes received
Total bytes received on primary network interface
count
76.46
GiB
mean
10.41
MiB/s
Total bytes sent
Total bytes sent on primary network interface
count
7.99
GiB
mean
1.09
MiB/s
CPU spike anomaly
Whether a CPU spike anomaly was detected
Not detected
Memory leak anomaly
Whether a memory leak anomaly was detected
Detected

Warning Memory growing at 935.71 MiB/s

Disk full anomaly
Whether a disk full anomaly was detected
Not detected
Swap thrashing anomaly
Whether a swap thrashing anomaly was detected
Not detected
System overload anomaly
Whether a system overload anomaly was detected
Detected

Warning 0% of hosts overloaded (load5/ncpus > 1.0)


CPU usage
Total CPU usage and kernel CPU usage
Total
p5
1.41
%
<
mean
17.75
%
SD
23.22
%
<
p95
68.08
%
System
5.66
%
CPU percentiles
CPU usage percentiles
p50
4.38
%
p95
68.08
%
p99
87.23
%
CPU usage above 80%
Number of hosts above 80% CPU and mean time spent above threshold for those hosts
count
175
hosts
mean time
0.02
s
Memory used percentage
Percentage of memory used
p5
6.41
%
<
mean
9.41
%
SD
5.29
%
<
p95
13.36
%
Memory available percentage
Percentage of available memory
p5
86.64
%
<
mean
90.59
%
SD
5.29
%
<
p95
93.59
%
Max host memory used
Maximum memory usage percentage across all hosts
max
75.98
%
Max host swap used percentage
Maximum swap space usage percentage across all hosts
max
0.14
%
Memory growth rate
Rate of memory growth over time
growth
935.71
MiB/s
Disk read throughput
Disk read throughput in MiB/s
0.26
MiB/s
Disk write throughput
Disk write throughput in MiB/s
75.56
MiB/s
Disk space utilization risk
Number of hosts nearing disk space capacity by mount point
Mount Point /
0/215
hosts
Mount Point /efi-boot
0/5
hosts
Mount Point /etc/hostname
0/9
hosts
Mount Point /etc/hosts
0/9
hosts
Mount Point /etc/resolv.conf
0/9
hosts
Mount Point /nix/store
0/9
hosts
System load average
System load averages over 1, 5, and 15 minutes. This is an unnormalised value not divided by number of CPUs, so it is only meaningful if all machines have the same core count.
1 min
0.57
5 min
0.48
15 min
0.31
CPU overloaded hosts
Percentage of hosts that experienced CPU overload
0.43
%
CPU pressure
CPU pressure over 10, 60, and 300 second averages
10 second average
p5
0
%
<
mean
4.5114
%
SD
9.7683
%
<
p95
23.9
%
60 second average
4.4499
%
300 second average
3.8555
%
Memory pressure some
Memory pressure (some tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
mean
0
%
SD
0
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
Memory pressure full
Memory pressure (all tasks stalled) over 10, 60, and 300 second averages
10 second average
p5
0
%
mean
0
%
SD
0
%
p95
0
%
60 second average
0.0000
%
300 second average
0.0000
%
I/O pressure some
I/O pressure (some tasks stalled) over 10, 60 and 300 second averages
10 second average
p5
0
%
<
mean
2.0447
%
SD
5.9734
%
<
p95
10.4
%
60 second average
1.9898
%
300 second average
1.6355
%
I/O pressure full
I/O pressure (all tasks stalled) over 10, 60 and 300 second averages
10 second average
p5
0
%
<
mean
1.7148
%
SD
5.5057
%
<
p95
8.68
%
60 second average
1.6628
%
300 second average
1.3605
%
Holochain process CPU usage
CPU usage by Holochain process
p5
0.23
%
<
mean
14.42
%
SD
21.93
%
<
p95
62.71
%
Holochain process memory (PSS)
Proportional Set Size memory of Holochain process
p5
105.62
KiB
<
mean
179.80
KiB
SD
94.09
KiB
<
p95
352.48
KiB
Holochain process threads
Number of threads in Holochain process
p5
11
threads
<
mean
23.16
threads
SD
11.25
threads
<
p95
38
threads
Holochain process file descriptors
Number of file descriptors used by Holochain process
p5
46
file descriptors
<
mean
70.31
file descriptors
SD
19.72
file descriptors
<
p95
96
file descriptors