DHT Sync Lag

Measures the lag between an agent publishing data and other peers being able to see it. This scenario has two roles:

  • write: A simple job that creates entries with a timestamp field and links them to a known base hash. For each write, the metric wt.custom.dht_sync_sent_count is incremented.
  • record_lag: A job that repeatedly queries for links from the known base hash. It keeps track of the records it has already seen; when a new record is found, it calculates the difference between the new record's timestamp and the current time, and records that difference as the custom metric wt.custom.dht_sync_lag.

After each behaviour loop, the metric wt.custom.dht_sync_recv_count is incremented.
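Under the description above, the record_lag bookkeeping reduces to a seen-set plus a timestamp subtraction. A minimal Rust sketch (the function names and `(id, timestamp)` tuples are illustrative; the real scenario works through Wind Tunnel's instrumented client):

```rust
use std::collections::HashSet;

/// Lag of a newly discovered record: current time minus the timestamp
/// the writer stored in the record (both in microseconds since epoch).
fn sync_lag_micros(record_timestamp_micros: u64, now_micros: u64) -> u64 {
    now_micros.saturating_sub(record_timestamp_micros)
}

/// One pass of the record_lag loop: take the records returned by a
/// link query, skip those already seen, and report the lag for each
/// new one. `queried` pairs a record id with its writer timestamp.
fn record_new_lags(
    queried: &[(String, u64)],
    seen: &mut HashSet<String>,
    now_micros: u64,
) -> Vec<u64> {
    queried
        .iter()
        // HashSet::insert returns true only if the id was not present.
        .filter(|(id, _)| seen.insert(id.clone()))
        .map(|(_, ts)| sync_lag_micros(*ts, now_micros))
        .collect()
}
```

Each value this returns would be emitted as the lag metric; records already seen on a previous pass produce nothing.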

Started
Sat, 21 Mar 2026 18:41:19 UTC
Peer count
250
Peer count at end
5225
Behaviours
  • record_lag (1 agent)
  • write (1 agent)
Holochain version
0.6.1-rc.3
Wind Tunnel version
0.6.0
Run ID
dht_sync_lag_23385213145
Create rate
The average number of created records per agent.
Mean
mean 67.12/s
Max
Highest per-partition mean rate
max 192.34/s
Min
Lowest per-partition mean rate
min 14.93/s
Sync lag timing
The average time between when a record was created and when it was first seen by an agent.
Mean
mean 1.302975933420001e+31s (SD = 2.1397546157879423e+31s)
Max
Highest per-partition mean latency
max mean 1.5963864127155133e+32s
Min
Lowest per-partition mean latency
min mean 41.987823s
Sync lag rate
The average number of created records discovered per agent.
Mean
mean 36.67/s
Max
Highest per-partition mean rate
max 510.5/s
Min
Lowest per-partition mean rate
min 0/s
Error count
The number of errors encountered during the scenario.
15
Host Metrics
User CPU usage
CPU usage by user space
31.55%
Network receive rate (primary)
Rate of bytes received on primary network interface
p5 26.62 KiB/s < mean 774.11 KiB/s (SD = 683.19 KiB/s) < p95 1.96 MiB/s
Network send rate (primary)
Rate of bytes sent on primary network interface
p5 1.17 KiB/s < mean 128.90 KiB/s (SD = 362.52 KiB/s) < p95 453.63 KiB/s
Total bytes received
Total bytes received on primary network interface
count
67.51 GiB
mean
6.16 MiB/s
Total bytes sent
Total bytes sent on primary network interface
count
11.00 GiB
mean
1.00 MiB/s
CPU spike anomaly
CPU spike anomaly was detected
Detected

Warning CPU p99 reached 94.5%

Memory leak anomaly
Memory leak anomaly was detected
Detected

Warning Memory growing at 384.26 MB/s

Disk full anomaly
Disk full anomaly was detected
NotDetected
Swap thrashing anomaly
Swap thrashing anomaly was detected
Detected

Critical Heavy swap usage (80.7% swap used)

System overload anomaly
System overload anomaly was detected
NotDetected
Additional Host Metrics
CPU usage
Total CPU usage and kernel CPU usage
Total
p5 2.97% < mean 46.25% (SD = 23.2%) < p95 83.31%
System
14.70%
CPU percentiles
CPU usage percentiles
p50
48.17%
p95
83.31%
p99
94.50%
CPU usage above 80%
Number of hosts above 80% CPU and mean time spent above threshold for those hosts
count
191 hosts
mean time
0.09s
Memory used percentage
Percentage of memory used
p5 5.54% < mean 10.76% (SD = 6.28%) < p95 19.83%
Memory available percentage
Percentage of available memory
p5 80.17% < mean 89.24% (SD = 6.28%) < p95 94.46%
Max host memory used
Maximum memory usage percentage across all hosts
max 55.95%
Max host swap used percentage
Maximum swap space usage percentage across all hosts
max 80.69%
Memory growth rate
Rate of memory growth over time
growth 384.26 MB/s
Disk read throughput
Disk read throughput in MB/s
0.06 MB/s
Disk write throughput
Disk write throughput in MB/s
956.15 MB/s
Disk space utilization risk
Number of hosts nearing disk space capacity by mount point
Mount Point /
0/221 hosts
Mount Point /boot
0/1 hosts
Mount Point /efi-boot
0/17 hosts
Mount Point /etc/hostname
0/20 hosts
Mount Point /etc/hosts
0/20 hosts
Mount Point /etc/resolv.conf
0/20 hosts
Mount Point /nix/store
0/21 hosts
System load average
System load averages over 1, 5, and 15 minutes. This is an unnormalised value not divided by number of CPUs, so it is only meaningful if all machines have the same core count.
1 min
1.82
5 min
0.95
15 min
0.46
CPU overloaded hosts
Percentage of hosts that experienced CPU overload
0.00%
CPU pressure
CPU pressure over 10, 60, and 300 second averages
10 second average
p5 0.13% < mean 21.668% (SD = 16.6483%) < p95 48.95%
60 second average
18.9105%
300 second average
9.4982%
Memory pressure some
Memory pressure (some tasks stalled) over 10, 60, and 300 second averages
10 second average
p5 0% mean 0% (SD = 0%) p95 0%
60 second average
0.0000%
300 second average
0.0000%
Memory pressure full
Memory pressure (all tasks stalled) over 10, 60, and 300 second averages
10 second average
p5 0% mean 0% (SD = 0%) p95 0%
60 second average
0.0000%
300 second average
0.0000%
I/O pressure some
I/O pressure (some tasks stalled) over 10, 60 and 300 second averages
10 second average
p5 0.04% < mean 9.8448% (SD = 11.1837%) < p95 32.87%
60 second average
8.3632%
300 second average
4.5449%
I/O pressure full
I/O pressure (all tasks stalled) over 10, 60 and 300 second averages
10 second average
p5 0.01% < mean 7.5151% (SD = 9.7661%) < p95 26.68%
60 second average
6.3421%
300 second average
3.5322%
Holochain process CPU usage
CPU usage by Holochain process
p5 6.89% < mean 45.74% (SD = 19.86%) < p95 79.28%
Holochain process memory (PSS)
Proportional Set Size memory of Holochain process
p5 149.21 KiB < mean 267.21 KiB (SD = 56.94 KiB) < p95 343.03 KiB
Holochain process threads
Number of threads in Holochain process
p5 11 threads < mean 24.36 threads (SD = 12.57 threads) < p95 45 threads
Holochain process file descriptors
Number of file descriptors used by Holochain process
p5 57 file descriptors < mean 73.1 file descriptors (SD = 16.75 file descriptors) < p95 94 file descriptors

Remote Call Rate

Tests the throughput of remote_call operations. Each agent in this scenario waits for a certain number of peers to be available, or for up to two minutes, whichever happens first, before starting its behaviour.
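The wait-for-peers gate described above can be sketched as a simple poll loop. The function and parameter names here are illustrative, not Wind Tunnel's actual API:

```rust
use std::time::{Duration, Instant};

/// Poll a peer-count source until at least `min_peers` are visible or
/// `timeout` elapses, whichever happens first. Returns the last count
/// observed, so callers can tell which condition ended the wait.
fn wait_for_peers(
    mut peer_count: impl FnMut() -> usize,
    min_peers: usize,
    timeout: Duration,
    poll_interval: Duration,
) -> usize {
    let deadline = Instant::now() + timeout;
    loop {
        let n = peer_count();
        if n >= min_peers || Instant::now() >= deadline {
            return n;
        }
        std::thread::sleep(poll_interval);
    }
}
```

For this scenario the timeout would be two minutes; an agent that never sees enough peers starts its behaviour anyway once the deadline passes.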

Started
Sat, 21 Mar 2026 18:09:38 UTC
Peer count
250
Peer count at end
4981
Behaviours
  • default (1 agent)
Holochain version
0.6.1-rc.3
Wind Tunnel version
0.6.0
Run ID
remote_call_rate_23385213145
Dispatch timing
The time between sending a remote call and the remote handler being invoked.
Mean
mean -2.077989s (SD = 25.277866s)
Max
Highest per-partition mean latency
max mean 70.572834s
Min
Lowest per-partition mean latency
min mean -17.798223s
Round-trip timing
The total elapsed time to get a response to the client.
Mean
mean 0.450675s (SD = 0.912238s)
Max
Highest per-partition mean latency
max mean 6.627484s
Min
Lowest per-partition mean latency
min mean 0.019211s
Host Metrics
User CPU usage
CPU usage by user space
6.58%
Network receive rate (primary)
Rate of bytes received on primary network interface
p5 6.05 KiB/s < mean 400.68 KiB/s (SD = 531.00 KiB/s) < p95 1.51 MiB/s
Network send rate (primary)
Rate of bytes sent on primary network interface
p5 1.55 KiB/s < mean 32.73 KiB/s (SD = 39.23 KiB/s) < p95 85.14 KiB/s
Total bytes received
Total bytes received on primary network interface
count
39.73 GiB
mean
3.96 MiB/s
Total bytes sent
Total bytes sent on primary network interface
count
3.16 GiB
mean
322.77 KiB/s
CPU spike anomaly
CPU spike anomaly was detected
NotDetected
Memory leak anomaly
Memory leak anomaly was detected
Detected

Warning Memory growing at 302.73 MB/s

Disk full anomaly
Disk full anomaly was detected
NotDetected
Swap thrashing anomaly
Swap thrashing anomaly was detected
Detected

Critical Heavy swap usage (80.7% swap used)

System overload anomaly
System overload anomaly was detected
NotDetected
Additional Host Metrics
CPU usage
Total CPU usage and kernel CPU usage
Total
p5 1.72% < mean 9.27% (SD = 15.89%) < p95 47.45%
System
2.69%
CPU percentiles
CPU usage percentiles
p50
4.76%
p95
47.45%
p99
87.95%
CPU usage above 80%
Number of hosts above 80% CPU and mean time spent above threshold for those hosts
count
101 hosts
mean time
0.03s
Memory used percentage
Percentage of memory used
p5 5.38% < mean 10.24% (SD = 8.28%) < p95 25.82%
Memory available percentage
Percentage of available memory
p5 74.18% < mean 89.76% (SD = 8.28%) < p95 94.62%
Max host memory used
Maximum memory usage percentage across all hosts
max 78.37%
Max host swap used percentage
Maximum swap space usage percentage across all hosts
max 80.70%
Memory growth rate
Rate of memory growth over time
growth 302.73 MB/s
Disk read throughput
Disk read throughput in MB/s
0.12 MB/s
Disk write throughput
Disk write throughput in MB/s
137.02 MB/s
Disk space utilization risk
Number of hosts nearing disk space capacity by mount point
Mount Point /
0/208 hosts
Mount Point /boot
0/1 hosts
Mount Point /efi-boot
0/19 hosts
Mount Point /etc/hostname
0/21 hosts
Mount Point /etc/hosts
0/21 hosts
Mount Point /etc/resolv.conf
0/21 hosts
Mount Point /nix/store
0/22 hosts
System load average
System load averages over 1, 5, and 15 minutes. This is an unnormalised value not divided by number of CPUs, so it is only meaningful if all machines have the same core count.
1 min
0.25
5 min
0.14
15 min
0.07
CPU overloaded hosts
Percentage of hosts that experienced CPU overload
0.00%
CPU pressure
CPU pressure over 10, 60, and 300 second averages
10 second average
p5 0% < mean 1.3269% (SD = 4.2809%) < p95 3.59%
60 second average
1.1220%
300 second average
0.5742%
Memory pressure some
Memory pressure (some tasks stalled) over 10, 60, and 300 second averages
10 second average
p5 0% mean 0% (SD = 0%) p95 0%
60 second average
0.0000%
300 second average
0.0000%
Memory pressure full
Memory pressure (all tasks stalled) over 10, 60, and 300 second averages
10 second average
p5 0% mean 0% (SD = 0%) p95 0%
60 second average
0.0000%
300 second average
0.0000%
I/O pressure some
I/O pressure (some tasks stalled) over 10, 60 and 300 second averages
10 second average
p5 0% < mean 0.6666% (SD = 1.9917%) < p95 3.74%
60 second average
0.6715%
300 second average
0.5431%
I/O pressure full
I/O pressure (all tasks stalled) over 10, 60 and 300 second averages
10 second average
p5 0% < mean 0.6181% (SD = 1.8937%) < p95 3.5%
60 second average
0.6250%
300 second average
0.5127%
Holochain process CPU usage
CPU usage by Holochain process
p5 0.55% < mean 6.62% (SD = 14.08%) < p95 21.93%
Holochain process memory (PSS)
Proportional Set Size memory of Holochain process
p5 195.66 KiB < mean 222.84 KiB (SD = 28.46 KiB) < p95 251.84 KiB
Holochain process threads
Number of threads in Holochain process
p5 10 threads < mean 23.59 threads (SD = 15.45 threads) < p95 48 threads
Holochain process file descriptors
Number of file descriptors used by Holochain process
p5 43 file descriptors < mean 72.75 file descriptors (SD = 18.91 file descriptors) < p95 98 file descriptors

Remote Signals

This scenario tests the throughput of remote_signals operations.
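A round-trip measurement of this shape can be sketched with a std channel standing in for the signal transport (the names and the channel are illustrative; the real scenario uses Holochain remote signals):

```rust
use std::sync::mpsc;
use std::time::{Duration, Instant};

/// Dispatch a signal, then block until the remote side's response
/// signal arrives or `timeout` elapses. Ok carries the round-trip
/// time; Err means the wait timed out.
fn measure_round_trip(
    response_rx: &mpsc::Receiver<()>,
    dispatch: impl FnOnce(),
    timeout: Duration,
) -> Result<Duration, ()> {
    let start = Instant::now();
    dispatch();
    match response_rx.recv_timeout(timeout) {
        Ok(()) => Ok(start.elapsed()),
        Err(_) => Err(()),
    }
}
```

Successful waits feed the round-trip-time distribution; Err results feed the timeout counter.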

Started
Sat, 21 Mar 2026 18:30:04 UTC
Peer count
250
Peer count at end
6723
Behaviours
  • default (1 agent)
Holochain version
0.6.1-rc.3
Wind Tunnel version
0.6.0
Run ID
remote_signals_23385213145
Round trip time
The time from the origin dispatching a signal to the origin receiving the remote side's response signal.
mean 0.45523s (SD = 2.440709s) p50 0.231606s p95 0.930044s p99 4.913639s
Timeouts
The number of timeouts waiting for the remote side's response signal (default timeout is 20 seconds).
count
260
measurement duration
11883.58s
mean rate
23110.3/s (SD = 1190711.1/s)
p5 rate
0/s
p95 rate
2698.85/s
peak rate
67137809.19/s
rate trend (10s windows)
0, 0.17, 16.34, 12.3, 30.61, 115.93, 114.38, 67.66, 69.69, 206.05, 97.35, 1558.89, 147.91, 215.96, 563.63, 2125.3, 165.35, 193.58, 2448.65, 528.81, 528.06, 545.8, 241.93, 243.93, 230.89, 195.53, 266.31, 350.63, 9072.72, 367285.99, 1203.19, 26400.39, 1981.75, 3112.13, 875.03, 300.81, 187.33, 199.12, 229.22, 186.4, 67.38, 63.52, 226.2, 269.06, 539.99, 131.26, 257.25, 743.63, 456.34, 463.44, 1538.95, 305.5, 1275.56, 105.7, 137.86, 195.03, 366.19, 3200.49, 547.06, 37.75, 42.37, 52.23, 58.01, 57.48, 66.55, 0.98, 0.92, 0.97, 0.98, 0.99, 0.99, 0.99, 1, 1, 0.99, 0.99, 0.99, 0.99, 1, 1, 1, 0.98, 0.99, 1, 0.99, 0.99, 0.99, 0.99, 1, 0.99, 1, 0.99, 0.99, 0.99, 0.99, 0.99, 0.99, 0.99, 0.99, 0.97, 0.98, 0.97, 0.97, 0.97, 0.97, 0.97, 0.97, 0.98, 0.97, 0.97, 0.98, 0.97, 0.97, 0.97, 0.97, 0.97, 0.97, 0.97, 0.97, 0.97, 0.97, 0.97, 0.98, 0.98
Host Metrics
User CPU usage
CPU usage by user space
7.26%
Network receive rate (primary)
Rate of bytes received on primary network interface
p5 16.15 KiB/s < mean 474.97 KiB/s (SD = 571.98 KiB/s) < p95 1.64 MiB/s
Network send rate (primary)
Rate of bytes sent on primary network interface
p5 4.45 KiB/s < mean 42.51 KiB/s (SD = 46.75 KiB/s) < p95 102.95 KiB/s
Total bytes received
Total bytes received on primary network interface
count
40.55 GiB
mean
3.49 MiB/s
Total bytes sent
Total bytes sent on primary network interface
count
3.76 GiB
mean
331.47 KiB/s
CPU spike anomaly
CPU spike anomaly was detected
NotDetected
Memory leak anomaly
Memory leak anomaly was detected
Detected

Warning Memory growing at 326.38 MB/s

Disk full anomaly
Disk full anomaly was detected
NotDetected
Swap thrashing anomaly
Swap thrashing anomaly was detected
Detected

Critical Heavy swap usage (80.7% swap used)

System overload anomaly
System overload anomaly was detected
NotDetected
Additional Host Metrics
CPU usage
Total CPU usage and kernel CPU usage
Total
p5 2.03% < mean 10.37% (SD = 15.87%) < p95 48.08%
System
3.11%
CPU percentiles
CPU usage percentiles
p50
6.04%
p95
48.08%
p99
89.11%
CPU usage above 80%
Number of hosts above 80% CPU and mean time spent above threshold for those hosts
count
114 hosts
mean time
0.03s
Memory used percentage
Percentage of memory used
p5 5.46% < mean 10.85% (SD = 8.41%) < p95 26.52%
Memory available percentage
Percentage of available memory
p5 73.48% < mean 89.15% (SD = 8.41%) < p95 94.54%
Max host memory used
Maximum memory usage percentage across all hosts
max 78.28%
Max host swap used percentage
Maximum swap space usage percentage across all hosts
max 80.70%
Memory growth rate
Rate of memory growth over time
growth 326.38 MB/s
Disk read throughput
Disk read throughput in MB/s
0.37 MB/s
Disk write throughput
Disk write throughput in MB/s
177.17 MB/s
Disk space utilization risk
Number of hosts nearing disk space capacity by mount point
Mount Point /
0/209 hosts
Mount Point /boot
0/1 hosts
Mount Point /efi-boot
0/19 hosts
Mount Point /etc/hostname
0/29 hosts
Mount Point /etc/hosts
0/29 hosts
Mount Point /etc/resolv.conf
0/29 hosts
Mount Point /nix/store
0/24 hosts
System load average
System load averages over 1, 5, and 15 minutes. This is an unnormalised value not divided by number of CPUs, so it is only meaningful if all machines have the same core count.
1 min
0.28
5 min
0.16
15 min
0.09
CPU overloaded hosts
Percentage of hosts that experienced CPU overload
0.00%
CPU pressure
CPU pressure over 10, 60, and 300 second averages
10 second average
p5 0% < mean 1.606% (SD = 4.1071%) < p95 4.17%
60 second average
1.3512%
300 second average
0.6827%
Memory pressure some
Memory pressure (some tasks stalled) over 10, 60, and 300 second averages
10 second average
p5 0% mean 0% (SD = 0%) p95 0%
60 second average
0.0000%
300 second average
0.0000%
Memory pressure full
Memory pressure (all tasks stalled) over 10, 60, and 300 second averages
10 second average
p5 0% mean 0% (SD = 0%) p95 0%
60 second average
0.0000%
300 second average
0.0000%
I/O pressure some
I/O pressure (some tasks stalled) over 10, 60 and 300 second averages
10 second average
p5 0% < mean 0.7552% (SD = 2.4201%) < p95 4.38%
60 second average
0.7257%
300 second average
0.7256%
I/O pressure full
I/O pressure (all tasks stalled) over 10, 60 and 300 second averages
10 second average
p5 0% < mean 0.6978% (SD = 2.2843%) < p95 4.17%
60 second average
0.6739%
300 second average
0.6891%
Holochain process CPU usage
CPU usage by Holochain process
p5 0.66% < mean 7.41% (SD = 13.8%) < p95 21.74%
Holochain process memory (PSS)
Proportional Set Size memory of Holochain process
p5 197.72 KiB < mean 224.89 KiB (SD = 29.01 KiB) < p95 253.34 KiB
Holochain process threads
Number of threads in Holochain process
p5 10 threads < mean 23.2 threads (SD = 12.3 threads) < p95 45 threads
Holochain process file descriptors
Number of file descriptors used by Holochain process
p5 45 file descriptors < mean 75.05 file descriptors (SD = 20.09 file descriptors) < p95 102 file descriptors

Write/get_agent_activity

A scenario where write peers write entries, while get_agent_activity peers each query a single write agent's activity with get_agent_activity.

Before a target write peer and the requesting get_agent_activity peer are in sync, this measures get_agent_activity call performance over the network. Once a write peer is in sync with a get_agent_activity peer, the write peer will have published its actions and entries, so the get_agent_activity calls will likely find most of the data they need locally. At that point the scenario measures database query performance and the code paths through host functions.

Started
Sat, 21 Mar 2026 19:03:28 UTC
Peer count
250
Peer count at end
5722
Behaviours
  • get_agent_activity (1 agent)
  • write (1 agent)
Holochain version
0.6.1-rc.3
Wind Tunnel version
0.6.0
Run ID
write_get_agent_activity_23385213145
Observed chain advancement
The highest observed action sequence per write agent, aggregated across all reading agents. The reading dimension is collapsed by taking the maximum observed value for each write agent (any reader successfully tracking that point counts as propagation). This captures how far readers tracked writers' chains during the run.
Mean max (across write agents)
Mean of the highest chain head value observed per write agent. Each write agent contributes its maximum — the value seen by any reader.
3966.23
Max
Highest chain head value observed for any single write agent.
12500
Write agents observed
30
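The aggregation described above (collapse readers by maximum, then average across writers) can be sketched as follows; the pair representation of observations is an assumption for illustration:

```rust
use std::collections::HashMap;

/// Collapse the reader dimension: for each write agent keep the
/// maximum action seq any reader observed, then average those maxima
/// across write agents. `observations` pairs a write agent id with
/// one reader's observed action seq.
fn mean_max_chain_head(observations: &[(&str, u64)]) -> f64 {
    let mut max_per_writer: HashMap<&str, u64> = HashMap::new();
    for &(writer, seq) in observations {
        let entry = max_per_writer.entry(writer).or_insert(0);
        *entry = (*entry).max(seq);
    }
    if max_per_writer.is_empty() {
        return 0.0;
    }
    let sum: u64 = max_per_writer.values().sum();
    sum as f64 / max_per_writer.len() as f64
}
```

With this definition a single reader tracking a writer's chain head is enough for that writer to contribute its full value to the mean.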
get_agent_activity_full zome call timing
The time taken to call the zome function that retrieves information on a write peer's source chain.
Mean
mean 0.398512s (SD = 0.601048s)
Max
Highest per-partition mean latency
max mean 1.211566s
Min
Lowest per-partition mean latency
min mean 0.116466s
Error count
The number of errors accumulated.
379
Host Metrics
User CPU usage
CPU usage by user space
36.46%
Network receive rate (primary)
Rate of bytes received on primary network interface
p5 117.33 KiB/s < mean 924.55 KiB/s (SD = 663.81 KiB/s) < p95 1.99 MiB/s
Network send rate (primary)
Rate of bytes sent on primary network interface
p5 4.96 KiB/s < mean 147.50 KiB/s (SD = 278.26 KiB/s) < p95 430.50 KiB/s
Total bytes received
Total bytes received on primary network interface
count
198.00 GiB
mean
20.69 MiB/s
Total bytes sent
Total bytes sent on primary network interface
count
31.45 GiB
mean
3.29 MiB/s
CPU spike anomaly
CPU spike anomaly was detected
Detected

Warning CPU p99 reached 94.8%

Memory leak anomaly
Memory leak anomaly was detected
Detected

Warning Memory growing at 457.31 MB/s

Disk full anomaly
Disk full anomaly was detected
NotDetected
Swap thrashing anomaly
Swap thrashing anomaly was detected
Detected

Critical Heavy swap usage (80.7% swap used)

System overload anomaly
System overload anomaly was detected
Detected

Warning 28% of hosts overloaded (load5/ncpus > 1.0)

Additional Host Metrics
CPU usage
Total CPU usage and kernel CPU usage
Total
p5 5.73% < mean 51.22% (SD = 28.75%) < p95 91.09%
System
14.76%
CPU percentiles
CPU usage percentiles
p50
54.47%
p95
91.09%
p99
94.82%
CPU usage above 80%
Number of hosts above 80% CPU and mean time spent above threshold for those hosts
count
194 hosts
mean time
0.26s
Memory used percentage
Percentage of memory used
p5 7.25% < mean 12% (SD = 6.67%) < p95 21.9%
Memory available percentage
Percentage of available memory
p5 78.1% < mean 88% (SD = 6.67%) < p95 92.75%
Max host memory used
Maximum memory usage percentage across all hosts
max 57.32%
Max host swap used percentage
Maximum swap space usage percentage across all hosts
max 80.67%
Memory growth rate
Rate of memory growth over time
growth 457.31 MB/s
Disk read throughput
Disk read throughput in MB/s
0.03 MB/s
Disk write throughput
Disk write throughput in MB/s
553.87 MB/s
Disk space utilization risk
Number of hosts nearing disk space capacity by mount point
Mount Point /
0/215 hosts
Mount Point /efi-boot
0/18 hosts
Mount Point /etc/hostname
0/21 hosts
Mount Point /etc/hosts
0/21 hosts
Mount Point /etc/resolv.conf
0/21 hosts
Mount Point /nix/store
0/23 hosts
System load average
System load averages over 1, 5, and 15 minutes. This is an unnormalised value not divided by number of CPUs, so it is only meaningful if all machines have the same core count.
1 min
2.41
5 min
1.76
15 min
1.10
CPU overloaded hosts
Percentage of hosts that experienced CPU overload
27.85%
CPU pressure
CPU pressure over 10, 60, and 300 second averages
10 second average
p5 0.08% < mean 29.8724% (SD = 29.2397%) < p95 82.44%
60 second average
28.1337%
300 second average
20.1135%
Memory pressure some
Memory pressure (some tasks stalled) over 10, 60, and 300 second averages
10 second average
p5 0% mean 0% (SD = 0%) p95 0%
60 second average
0.0000%
300 second average
0.0000%
Memory pressure full
Memory pressure (all tasks stalled) over 10, 60, and 300 second averages
10 second average
p5 0% mean 0% (SD = 0%) p95 0%
60 second average
0.0000%
300 second average
0.0000%
I/O pressure some
I/O pressure (some tasks stalled) over 10, 60 and 300 second averages
10 second average
p5 0% < mean 7.1489% (SD = 9.9585%) < p95 32.15%
60 second average
6.8004%
300 second average
5.0940%
I/O pressure full
I/O pressure (all tasks stalled) over 10, 60 and 300 second averages
10 second average
p5 0% < mean 4.5849% (SD = 8.3563%) < p95 26.67%
60 second average
4.3738%
300 second average
3.3765%
Holochain process CPU usage
CPU usage by Holochain process
p5 6.75% < mean 48.62% (SD = 27.36%) < p95 86.65%
Holochain process memory (PSS)
Proportional Set Size memory of Holochain process
p5 188.59 KiB < mean 300.75 KiB (SD = 74.36 KiB) < p95 398.12 KiB
Holochain process threads
Number of threads in Holochain process
p5 11 threads < mean 27.19 threads (SD = 11.4 threads) < p95 43 threads
Holochain process file descriptors
Number of file descriptors used by Holochain process
p5 53 file descriptors < mean 73.52 file descriptors (SD = 16.68 file descriptors) < p95 92 file descriptors

Write/get_agent_activity with volatile conductors

A scenario where write peers write entries, while get_agent_activity_volatile peers each query a single write agent's activity with get_agent_activity, but shut down and restart their conductors at semi-random intervals.

Before a target write peer and the requesting get_agent_activity_volatile peer are in sync, this measures get_agent_activity call performance over the network. Once a write peer is in sync with a get_agent_activity_volatile peer, the write peer will have published its actions and entries, so the get_agent_activity calls will likely find most of the data they need locally. At that point the scenario measures database query performance and the code paths through host functions.
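The shutdown/restart cadence could be planned like this. The interval bounds and the LCG are purely illustrative, since the report does not specify the actual distribution of the semi-random intervals:

```rust
use std::time::Duration;

/// A tiny LCG so the sketch needs no external crates; the real
/// scenario's randomness source is not specified in the report.
struct Lcg(u64);

impl Lcg {
    /// Next interval in [base_secs, base_secs + jitter_secs].
    fn next_in(&mut self, base_secs: u64, jitter_secs: u64) -> Duration {
        // Constants from Knuth's MMIX LCG.
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        Duration::from_secs(base_secs + self.0 % (jitter_secs + 1))
    }
}

/// Plan `cycles` (running, stopped) interval pairs for one volatile
/// conductor: run for the first duration, stay down for the second.
fn plan_cycles(seed: u64, cycles: usize) -> Vec<(Duration, Duration)> {
    let mut rng = Lcg(seed);
    (0..cycles)
        .map(|_| (rng.next_in(20, 120), rng.next_in(10, 120)))
        .collect()
}
```

Seeding per conductor keeps each peer's cycle pattern reproducible across runs while still desynchronising the fleet.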

Started
Sat, 21 Mar 2026 19:25:01 UTC
Peer count
250
Peer count at end
5473
Behaviours
  • get_agent_activity_volatile (1 agent)
  • write (1 agent)
Holochain version
0.6.1-rc.3
Wind Tunnel version
0.6.0
Run ID
write_get_agent_activity_volatile_23385213145
Highest observed action_seq
The rate at which zero-arc readers observe new chain heads on the writers' chains via get_agent_activity. This reflects the DHT's ability to propagate agent activity ops and make them available to querying peers.
Mean
mean 243.27/s
Max
Highest per-partition mean rate
max 2089.67/s
Min
Lowest per-partition mean rate
min 0/s
get_agent_activity_full zome call timing
The time taken to call the zome function that retrieves information on a write peer's source chain.
Mean
mean 0.302773s (SD = 0.612358s)
Max
Highest per-partition mean latency
max mean 9.807228s
Min
Lowest per-partition mean latency
min mean 0.003808s
Running volatile conductors count
The number of conductors being run by get_agent_activity_volatile peers.
p5 82 conductors < mean 145.17 conductors (SD = 28.37 conductors) < p95 164 conductors
Volatile conductor total running duration
The total running duration of a get_agent_activity_volatile conductor.
Mean
mean 378.444706s (SD = 312.299173s)
Max
Highest per-partition mean latency
max mean 809.857143s
Min
Lowest per-partition mean latency
min mean 195.2s
Volatile conductor running duration
The duration that a get_agent_activity_volatile conductor was running before being stopped.
Mean
mean 84.796499s (SD = 48.985077s)
Max
Highest per-partition mean latency
max mean 145.765861s
Min
Lowest per-partition mean latency
min mean 20.469875s
Volatile conductor stopped duration
The duration that a get_agent_activity_volatile conductor was stopped before being started again.
Mean
mean 78.564762s (SD = 58.152591s)
Max
Highest per-partition mean latency
max mean 130.635655s
Min
Lowest per-partition mean latency
min mean 9.705911s
Reached target arc
Whether a get_agent_activity_volatile peer had reached its target arc at the moment before its conductor was shut down.
  • get_agent_activity_volatile_agent: uhCAk-5DU961T5a3IKCV6ZB_eck4bc8FT1etD6wBMr81m18v30J75
p5 0 < mean 0.5 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAk-NeQan1BfeH0RmmDcMWxAHPH9wbGq-i5xQSu6xd2aho0X6oS
p5 0 < mean 0.3 (SD = 0.46) < p95 1
  • get_agent_activity_volatile_agent: uhCAk-Z-EISDFHXsIs3_UqXixKpKmzdZ30w7CYJI6tfF_-4t4FMg_
p5 0 < mean 0.38 (SD = 0.48) < p95 1
  • get_agent_activity_volatile_agent: uhCAk-eWR2MeywmZ40f2a9f63mk0WmhMXOLO3kalKUptpnDxRyVLr
p5 0 < mean 0.33 (SD = 0.47) < p95 1
  • get_agent_activity_volatile_agent: uhCAk12gPXg-Jqtfn6HkbCSFNfEVgTGRV-Fc3bKpXwJBLPgW4SC7m
p5 0 < mean 0.5 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAk1EW8EkzbNWocZjHaepFWZpJWdwEV61nvFy7bnD-qrikGJH2g
p5 0 < mean 0.29 (SD = 0.45) < p95 1
  • get_agent_activity_volatile_agent: uhCAk1Uv5G8HRHgc0mFRqJsvt2-XPUZqSwv3aG0U-ued4jbaMkB-x
p5 0 < mean 0.38 (SD = 0.48) < p95 1
  • get_agent_activity_volatile_agent: uhCAk1hSMoMMMnouYl8kXEA7sUoGTqoTwcf6hj5jGP1OSGK9bWHrm
p5 0 < mean 0.5 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAk2VshZ1qs4iIEyvcwJQlCAASCaYJUvvUolS8MdH7yPqPsJ_-R
p5 0 < mean 0.44 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAk2X4e1iW-ITghtUe8rqP8OLWRmsllJdLHlpoTn8GeAy-oVDJh
p5 0 < mean 0.29 (SD = 0.45) < p95 1
  • get_agent_activity_volatile_agent: uhCAk2hYfI1N5r6ijJJjYriSEYooCJPFFFPlz0eNZ9qByoSpZNDed
p5 0 < mean 0.11 (SD = 0.31) < p95 1
  • get_agent_activity_volatile_agent: uhCAk2w0FUuyh6gFgNFvsORwR3dOnwOaVmubcQTTVVETIsvowSL06
p5 0 < mean 0.5 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAk3gHef4YsAIpQEInNMcQgmyemS1dkIBoqirdC9tPhDVMINk3o
p5 0 < mean 0.4 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAk3gVpSeJKLllYYh4Uvt9PG5bc-GJQUCulVfl-P0wjhYL92gRv
p5 0 < mean 0.67 (SD = 0.47) < p95 1
  • get_agent_activity_volatile_agent: uhCAk3ss8Ix9udVj8OzwVM719VC_TAtgO8JlsN_jKbvTwZqS6Xfoh
p5 0 < mean 0.29 (SD = 0.45) < p95 1
  • get_agent_activity_volatile_agent: uhCAk4Bd675qqVzc-WXgd7lHAr1wzwckktx9tg0VRzffu6o1vi07t
p5 0 < mean 0.5 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAk4keCoP1pXyKlfyNAYdXxHxQBMoeLv_ow1oqLyMqZWtGS27OD
p5 0 < mean 0.43 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAk5PpOWKx1pd-uO3gHR4x8oX0e6RiKfcYxvlz2xiakIuzzVAK2
p5 0 < mean 0.5 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAk5iZ9gDQH5S2ZvZv-L0BuK58O6DM1eG61CsK8OClbAHvmBA81
p5 0 < mean 0.4 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAk62UCwRq7pph_TB_NcZ8oncN8csWdLHoa7hFyl11Cv-c8JRKJ
p5 0 < mean 0.15 (SD = 0.36) < p95 1
  • get_agent_activity_volatile_agent: uhCAk6HMCyzHdq_Igfva3xPNJLM5mf7KW-BS2kC7cMghCRgCKGZ9T
p5 0 < mean 0.22 (SD = 0.42) < p95 1
  • get_agent_activity_volatile_agent: uhCAk74eopRXfsu56VwCn23e3U3a0e6zRUCe08mXv39_9t8FHyE0i
p5 0 < mean 0.25 (SD = 0.43) < p95 1
  • get_agent_activity_volatile_agent: uhCAk77DnnbzlNCbT7XJ9KL_LTMh_hspyqAHI_wyDbln1vfZ0S5Tg
p5 0 < mean 0.5 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAk7LavHYYaLVLllZsebSu0YsL3R8m-8928mvAoVa095zwvKD94
p5 0 < mean 0.25 (SD = 0.43) < p95 1
  • get_agent_activity_volatile_agent: uhCAk7f6Ndmwpi66y4nES7iyt_kU4u7MLNTVe0Ms8g7VZ6n0c5tfF
p5 0 < mean 0.14 (SD = 0.35) < p95 1
  • get_agent_activity_volatile_agent: uhCAk7pqtB73HGPAHgqur7Z1Pl302IxU5BRDAweHcy23_90IhoIZj
p5 0 < mean 0.67 (SD = 0.47) < p95 1
  • get_agent_activity_volatile_agent: uhCAk7xPt9WvgEo1c-EKAvYkA5WYasPM5jF0b6zzK7FShsq24mgd3
p5 0 < mean 0.27 (SD = 0.45) < p95 1
  • get_agent_activity_volatile_agent: uhCAk85y0j-_L4dAYJaEL895Hot0_fKCc7CFmvqcXRizEXQRj3dhI
p5 0 < mean 0.33 (SD = 0.47) < p95 1
  • get_agent_activity_volatile_agent: uhCAk8Kz1uPCe0rIiBTc3Dr6hpzMNMKTA1KldIeHnvi7kFyx7kkON
p5 0 < mean 0.83 (SD = 0.37) < p95 1
  • get_agent_activity_volatile_agent: uhCAk8ygmdt-bMzud5rqdadWKtDH9qGBHd4ghQTo6QOJKuMRyjo5G
p5 0 < mean 0.09 (SD = 0.29) < p95 1
  • get_agent_activity_volatile_agent: uhCAk9jRQnYWosyeq6dJ3iBu2IrIAViaMMsv_opyyvAdb-K2R8n94
p5 0 < mean 0.43 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAk9ySOGFHpAYRspILM3NqnhRUQ8EUScu387fsfmlTkOpeS4rNT
p5 0 < mean 0.43 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkA1_SKm7ecPdSyi4Zqb0P_9T-0EHibUEPB0qiRujl0bSPU63O
p5 0 < mean 0.1 (SD = 0.3) < p95 1
  • get_agent_activity_volatile_agent: uhCAkAWWMzjEaiZB6CCehNzgH6N2K17kXNZMuH7xVQgyOj2-zopg_
p5 0 < mean 0.56 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAkAeRdlmgBIKRMhSJq1nOVsCAoubhI0UXLzblYC39KqVCM9pYr
p5 0 < mean 0.63 (SD = 0.48) < p95 1
  • get_agent_activity_volatile_agent: uhCAkB27JGAMFke5HBvThVvk8JgSoVajpu_kE3_yfxrcodzTdDqA1
p5 0 < mean 0.43 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkBnRvEYenPAVpUegMrlZc3e3hVVa-RvV72ViLrQnzi-IljAIY
p5 0 < mean 0.25 (SD = 0.43) < p95 1
  • get_agent_activity_volatile_agent: uhCAkCNWS0-QUFU4czlBXlAmMkRNMKXKqvdElg_BR0CL1t9a4UXPc
p5 0 < mean 0.08 (SD = 0.28) < p95 1
  • get_agent_activity_volatile_agent: uhCAkCz6sycb3IFchzotYXZrXJs1rHOwJNudm7X8HhclCd44FX7Ud
p5 0 < mean 0.38 (SD = 0.48) < p95 1
  • get_agent_activity_volatile_agent: uhCAkDDH3RnQKUDkCe9eB3kgy5Mi93RwQwXPZZuB7ugKQA7d-D39V
p5 0 < mean 0.57 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkDDIhZaCwXp1yMQ6ghB4747q6TUwuiy-8qNTuYYsy169dkBnp
p5 0 mean 0 (SD = 0) p95 0
  • get_agent_activity_volatile_agent: uhCAkD_YfzuPPHUpVMoRuPl4Xf8u62FsTyHivoVfwoWXQDXLMB65P
p5 0 < mean 0.44 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAkDg2nbGJ7Qjqk6KUi_EnSyjnNSXmGaM5YnyzuxB-nuNhLx2SS
p5 0 < mean 0.33 (SD = 0.47) < p95 1
  • get_agent_activity_volatile_agent: uhCAkDgnQ_NRgMXp3D8rYK5kEl8ZxiTcumaL9nrZOrEh-HqVMlvJx
p5 0 < mean 0.57 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkDjvqPtWBmY9ueTR92zz_xlB0_tfDvGewgKGvxquuLZ_BKQmE
p5 0 < mean 0.33 (SD = 0.47) < p95 1
  • get_agent_activity_volatile_agent: uhCAkE3z6TQDQknmNffQv6Gbej1EOH-rrd6RYaD63U0cCZhqPed-L
p5 0 < mean 0.2 (SD = 0.4) < p95 1
  • get_agent_activity_volatile_agent: uhCAkEG7YLoTkg88-vasQESIgQ7yOxxc6pbhvvyFHXAC0gW-7UTv0
p5 0 < mean 0.15 (SD = 0.36) < p95 1
  • get_agent_activity_volatile_agent: uhCAkE_erAqQXPktmVgrQE1siR1Eg8xgpeTAiXtmpaLLGj5NXBQrQ
p5 0 < mean 0.38 (SD = 0.48) < p95 1
  • get_agent_activity_volatile_agent: uhCAkEdsNgTkklum2J_Dnr5YsU5Eqbf651cTR3pdQV6xqcoWGZI8Y
p5 0 < mean 0.45 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAkElShEx9h8ou4erF88CbMlXsHttncQl9HBbxSF7AT0MMircRg
p5 0 < mean 0.4 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkF2V731ReVypeVv3sjezJBfumesDJ3IOfTujb4GrQvW6pj1OR
p5 0 < mean 0.63 (SD = 0.48) < p95 1
  • get_agent_activity_volatile_agent: uhCAkFXPZPTBZn9IG9pIoq6_fFC1W5xWlsa2BGOH1Rr-9wMyTogG_
p5 0 < mean 0.25 (SD = 0.43) < p95 1
  • get_agent_activity_volatile_agent: uhCAkG175pBXf5HFjss1VkQd_tT7Vw_2JuXPb-J00NcVUix3IMqTm
p5 0 < mean 0.45 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAkG4YLWqrZxVbFAnjI3Aw3PeYhB7ffqEZmQ5l7CkilnEEdlyEe
p5 0 < mean 0.2 (SD = 0.4) < p95 1
  • get_agent_activity_volatile_agent: uhCAkGCw860sNucicgHsbYH26PzrWHDbQQl-624K1e9D8ndndBt67
p5 0 < mean 0.44 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAkGHXw1ujZB-3Oc1ou_Sb4qZP5YxfuFkU7P-AeOgq8WAaV1wD4
p5 0 < mean 0.22 (SD = 0.42) < p95 1
  • get_agent_activity_volatile_agent: uhCAkGiZQ9BWetBZwiWTRsD-Bf_yPoEQQDb7397c1EVcpe-CwqNQ2
p5 0 < mean 0.5 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAkH-06xVkipTIX9Q5-dtZF9vGPkSWGjEXn5yEEUjysAZ8ueuAu
p5 0 < mean 0.44 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAkHVVPKDpfBgQoicBVjLvdAOOv5SDIRlzL8uP84QaglD4pWTVl
p5 0 < mean 0.08 (SD = 0.27) < p95 1
  • get_agent_activity_volatile_agent: uhCAkHZRYV5fWcBzsRERHJzz6oWbXS8DrGy-56TV-jzJyaC96OtoK
p5 0 mean 0 (SD = 0) p95 0
  • get_agent_activity_volatile_agent: uhCAkHfdcO60bTAvRjqYF5An_djNUVuiZqdzJxKcgypzP9W4h72mA
p5 0 < mean 0.43 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkHpVYFE7kfvhwpzfTUdMfGDGnS8Ya3fNCRuHGGQOZK7B6ZZGk
p5 0 < mean 0.44 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAkHv5L4RIU4QWvVXqYGmI8_lOhOlGlI9YPuM15E9ppm0GjkpQn
p5 0 < mean 0.44 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAkI207mlVhmMWzMKLzt8Mq-wkK_oJ8L9Q9slBmDnx1aIBl34k5
p5 0 < mean 0.67 (SD = 0.47) < p95 1
  • get_agent_activity_volatile_agent: uhCAkIkdDkLuhqhm-sP1n-Sz5rqOoSEbhB2In49flzRfoaHVn5REa
p5 0 < mean 0.29 (SD = 0.45) < p95 1
  • get_agent_activity_volatile_agent: uhCAkIs4enegZbPWSno4gbrtFHAfh0spw6P5XOX2HHJfsqOoYy1vR
p5 0 < mean 0.43 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkKCwQ1mxtZeAwVHnngvrQBz9HQ57xPD3uTiD5haCVEt8GR5CF
p5 0 < mean 0.2 (SD = 0.4) < p95 1
  • get_agent_activity_volatile_agent: uhCAkMyKw_-59fPBTYpgyEojbb3GetNc1oalIwCbVI-5fCNOQs2xu
p5 0 < mean 0.38 (SD = 0.48) < p95 1
  • get_agent_activity_volatile_agent: uhCAkN585x3Vfw0ipJEb7DvuAoeeLUmtsozc35-gBYzJBH-Xwqwfg
p5 0 < mean 0.33 (SD = 0.47) < p95 1
  • get_agent_activity_volatile_agent: uhCAkNYuu6_rqEQ-W461ijLRfMotF6qL2DC3s1uIgDX321go6V9Li
p5 0 < mean 0.33 (SD = 0.47) < p95 1
  • get_agent_activity_volatile_agent: uhCAkNf5VqHuLbVhYsV68F6c8O_124b5RBlPohLcYB7zyFRjktCk6
p5 0 < mean 0.5 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAkO-v0quk8Fczpix4xf4bkS7ywqF6KN-RlhdifSJuB1MhnRSN4
p5 0 < mean 0.5 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAkOAgCfHyDOkWZTlG7jG2EyjCBTf0ogcMZXS3Im36DLgL-jFpg
p5 0 < mean 0.57 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkPE2zhWILPuo8qhyyhgzZ5bPXh0-9WCskX5IWPz6Szxq164V9
p5 0 < mean 0.22 (SD = 0.42) < p95 1
  • get_agent_activity_volatile_agent: uhCAkPHfIqZqNIBIRmbHSanePao6pJD3J2Edx2JuyV-lnpyS7EIzG
p5 0 < mean 0.57 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkPkcarMjkDGxKZkrPIuhixC3sZP4waoL0NuIe2OL0Db-KCPD1
p5 0 < mean 0.63 (SD = 0.48) < p95 1
  • get_agent_activity_volatile_agent: uhCAkQC7TzjVuv1V5Hhht4UAQSOvAG1OzncYb8OU6wVgmq-D5UXZM
p5 0 < mean 0.25 (SD = 0.43) < p95 1
  • get_agent_activity_volatile_agent: uhCAkQdc-LPHP_CigbrYNENslsOkQZmB-6j6j-AJsVQIMZsCn7lAQ
p5 0 < mean 0.5 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAkRNDAwN2h_wYq_LWX0A0D7RettLG0NkVwBPDYT4-C4xFsVDKU
p5 0 < mean 0.4 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkROrHzFFmVDF2__ZBqmMx9UUqvtFqdlVDRvUfjDg6o19NqdWB
p5 0 < mean 0.15 (SD = 0.36) < p95 1
  • get_agent_activity_volatile_agent: uhCAkR_l726i0U5do8ZiBZM0SOKSCH32MCR8dVn3HW4cCNTEREn33
p5 0 < mean 0.43 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkRlvEY00dd6jmFZ4qXnIWk_1vM6apmMvqIsRjClLmpVAnPs2I
p5 0 < mean 0.33 (SD = 0.47) < p95 1
  • get_agent_activity_volatile_agent: uhCAkS4DuaJ6l8DfaUytqPaRpq0evbY3kWrnqJZAknSkwii5i4VJf
p5 0 < mean 0.57 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkS94NDQMnCL7KM0s8ZUDsJNXO9XnPSGK8KQJ24EEKm9gB2gbg
p5 0 < mean 0.33 (SD = 0.47) < p95 1
  • get_agent_activity_volatile_agent: uhCAkSE5m89GTB2L6ZkxYodKdX5c2yXvKEWYA89LihU1WUFlLL7sW
p5 0 < mean 0.57 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkSGqn_tLJH0arUVsHUTqF-sCrek_l99WJxu9w0W9njXk4OcmT
p5 0 < mean 0.5 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAkUSM8K6trP9j-T4vLi5uZ0LEF7hbf4256BHu1t9ohZJVPpkDx
p5 0 < mean 0.44 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAkUW8PlI7fWQw5DEeYoMFhtaHV0kouVSdy0VFTQNIHcAOmfjpg
p5 0 < mean 0.4 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkVYCDgW3czCwvU8nKIoHb0Jlr0FqJyGMr477xTVdUJJIzk0je
p5 0 < mean 0.25 (SD = 0.43) < p95 1
  • get_agent_activity_volatile_agent: uhCAkVrxB-lC6jhNue_8FLIqYka-MAfD_tG64nMeIft_ne6LMlWHf
p5 0 < mean 0.6 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkW2GE1V21cjipaEmdZV4a2SlxWhqr1etPKulj7lkTqIGm0DFU
p5 0 < mean 0.27 (SD = 0.45) < p95 1
  • get_agent_activity_volatile_agent: uhCAkWDyavleFY-q7JFjAUviQJcyHBq6TkgYo3wsV19MbJFMdw-vg
p5 0 < mean 0.25 (SD = 0.43) < p95 1
  • get_agent_activity_volatile_agent: uhCAkWySq8DPHyLN2ViC7a8VPtcEytp-nRAslvUtb08xpmYQ_KBBl
p5 0 < mean 0.18 (SD = 0.39) < p95 1
  • get_agent_activity_volatile_agent: uhCAkYFrUHeRqIDZCrykqh4mnk-8usqs2E9ROqnSqytHP75k0DICO
p5 0 < mean 0.43 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkYH5hkQ9m_8tQCHOFP_hYuOFB7FEqRxmQ_W7t1A9l5rBOs5Kr
p5 0 < mean 0.38 (SD = 0.48) < p95 1
  • get_agent_activity_volatile_agent: uhCAkY__Y2FscfOKIslln5TTWHFf_5-VHHWevzhDlfBMd62snK9RD
p5 0 < mean 0.2 (SD = 0.4) < p95 1
  • get_agent_activity_volatile_agent: uhCAkYqQ2bEHwLnHmzZfXBFd781zWirZGifDQa2YPoE_ypXZ7hQLm
p5 0 < mean 0.38 (SD = 0.48) < p95 1
  • get_agent_activity_volatile_agent: uhCAk_8hpmMD2tMWt9ZgoNDlwC0y1hsx_WncPYA7wxg6ZM-o7Cy3o
p5 0 < mean 0.38 (SD = 0.48) < p95 1
  • get_agent_activity_volatile_agent: uhCAkaWUiH7dU8JLHmFuxOOVuV8Chd5OxkGHrmol0zUeSYP0ju-eP
p5 0 < mean 0.43 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkacvDoy3v719pwgqu2EGbdeQFHGDajbpGPBI-9rehC10S4Zrs
p5 0 < mean 0.07 (SD = 0.26) < p95 1
  • get_agent_activity_volatile_agent: uhCAkaeox-2F3EHwOdKNWyc4EiU37K6ju9BRK0mJS97-p6PYlM8Vn
p5 0 mean 0 (SD = 0) p95 0
  • get_agent_activity_volatile_agent: uhCAkafhSFsEaqbs5gNf1ybp2ezDI7gfnuHr2WlFogL8lvHdYCA7c
p5 0 < mean 0.38 (SD = 0.48) < p95 1
  • get_agent_activity_volatile_agent: uhCAkb36ZH4VRvyo6A1Q4TEoXikRoja4Nx44vZpBi3cjO-x-bsFhb
p5 0 < mean 0.4 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkbHDghvlK6MSiefAfleum6JjIwnpkBTTWAuUlNuIKA2MhJpT4
p5 0 mean 0 (SD = 0) p95 0
  • get_agent_activity_volatile_agent: uhCAkbTW3WX6pRIuYiQa38XWVHVfL_iBTqR8_8WMwo8ap1IZOTBxb
p5 0 < mean 0.4 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkbeuVSbfSVkS7gJY5N4Ch29dFqbPREiI06M0MVn9ow3t6-cqZ
p5 0 < mean 0.14 (SD = 0.35) < p95 1
  • get_agent_activity_volatile_agent: uhCAkboJq6I0PkfL3-nmHN3SVn_oHCsH5JEpGunsRVd6c82Z_rC8t
p5 0 < mean 0.33 (SD = 0.47) < p95 1
  • get_agent_activity_volatile_agent: uhCAkbogCfqLIvUPaGbkKDApp3L69Ny6rJXkaJmVke9zNZLrkt84F
p5 0 < mean 0.44 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAkbucEzeVe4vXLqrVCQnGjI9MmQvdlyVw2cJX7k3nTbnjgfvu8
p5 0 < mean 0.5 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAkcoYA9wodFp4jWkRNc5iNgFg5AiYIs6X44dOHqco7yQ_6_Fgg
p5 0 < mean 0.57 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkcsjMXH55Og61EtoE0tfcvHTlvP17QKF-RjbdKzg9_C6sjCGy
p5 0 < mean 0.11 (SD = 0.31) < p95 1
  • get_agent_activity_volatile_agent: uhCAkdIVNTnnR9tUdE4uT8kwDY9sGlHP136O32poDHM5SL4ff5cjj
p5 0 < mean 0.1 (SD = 0.3) < p95 1
  • get_agent_activity_volatile_agent: uhCAkdSSnCnRt64y9rv39W1dPoqvPDK-4qg46wQ8CXEGk66Xqi8o2
p5 0 < mean 0.4 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkeQCbZuCDeR8b-HkAsigKCW_6cCyWLvqJFQQVuG_0p9-7YpUD
p5 0 < mean 0.5 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAkevctXs8qqtCJFO53xWD9pr3Dxj-8eVPk0yvXOXMwkIeC8b8J
p5 0 < mean 0.4 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkh81clHGn1D7pa-b34mMZ5Z-wBCb-nfJHzSgt4usPl6--kik5
p5 0 < mean 0.33 (SD = 0.47) < p95 1
  • get_agent_activity_volatile_agent: uhCAkhbwNd9693E2sg925CQUXMx68q1_W0E_kBlXyFmyuicuJUxsR
p5 0 < mean 0.43 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkiCJbeayPLvUo3wx8GeiLYlRP0Ayv_V1gQuAzEUXbYlY9hmyT
p5 0 < mean 0.5 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAkicVB1oP_OULBEZW2UriOHWACzuyAkk1_PHX-HFcS8ntZE3NB
p5 0 < mean 0.5 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAkidah41cSHna_stloCsOPF_L43FSHbIc5Af4QUkoDwZZxEL7m
p5 0 < mean 0.29 (SD = 0.45) < p95 1
  • get_agent_activity_volatile_agent: uhCAkiuoov32Px0sXlCkh9NLRJsJzNjq5w92OF7u27xj_8CyrH8MA
p5 0 mean 0 (SD = 0) p95 0
  • get_agent_activity_volatile_agent: uhCAkiv1V615Obw35tSv4OXFPmZQa2eZvj9HPF0I0WG0ArDR8FT2p
p5 0 < mean 0.43 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkjEHE8En5gWS80HaomDxr_BOb3KYOeFv23C2lZ_g9vNjHe8Do
p5 0 < mean 0.14 (SD = 0.35) < p95 1
  • get_agent_activity_volatile_agent: uhCAkjG8vIit6lwfBQQHmmhh2__9RJ7oF-szefdfENgtHpEv-Unyj
p5 0 < mean 0.57 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkjIC2dd3u1yHlKkATJeNpLFICyzvIZUPEKvOlnwjBQWuZmBJS
p5 0 < mean 0.1 (SD = 0.3) < p95 1
  • get_agent_activity_volatile_agent: uhCAkje76e9xp6CXn_5n7qcCqSQVdKzRCXhMCGLGdauZihPFXvmdp
p5 0 < mean 0.4 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkkEDW279_pzjFEsFTtinp6ItQxrDPi349UZ-zXDH4yiuGGGeS
p5 0 < mean 0.5 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAkkGDo-BLuEpqfq_NUTYr-yBcWTt8XT5z1PAP7D3O6OjXPQL_a
p5 0 < mean 0.6 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkkabfseJ9pSQ5pCjEpdvvQRYUfoa5EgAwDuF1FgNplROpdIsS
p5 0 < mean 0.14 (SD = 0.35) < p95 1
  • get_agent_activity_volatile_agent: uhCAklHuueSRxFo5NZ1dabw1dUSRIaepI92Y1f1nqYQHigkkWL9ok
p5 0 < mean 0.5 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAklOrLUKc4HWe6O_LCO13cNIJiyZbXieduaiw8WU5p5g3Mc0Kq
p5 0 < mean 0.13 (SD = 0.33) < p95 1
  • get_agent_activity_volatile_agent: uhCAklyffjmpoJqaX30q7x8IkULWQ7WmIZwBwm8f2472SVK_JqH3I
p5 0 < mean 0.07 (SD = 0.26) < p95 1
  • get_agent_activity_volatile_agent: uhCAkm1drQhxNtrnqkDtuK2fblsOVABUVkPBbMAvexXu9jyhvc9XD
p5 0 < mean 0.17 (SD = 0.37) < p95 1
  • get_agent_activity_volatile_agent: uhCAkm6FFQrv7OcKx9p82pQ4dSP_MBqNm7phykqabsZZESCkl6Qeb
p5 0 < mean 0.63 (SD = 0.48) < p95 1
  • get_agent_activity_volatile_agent: uhCAkmTSX8arkWt1OYt1O9A90E2I3Ah3YuSs2LSZWSXbnYJvLl4kL
p5 0 < mean 0.4 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkmnb5VZKRg_XvWtBu-z2mZltRSt90OPoBCYb8Sw6xhFig9FR7
p5 0 < mean 0.38 (SD = 0.48) < p95 1
  • get_agent_activity_volatile_agent: uhCAknAFJd1JSiXb5UCMlTOrGNnY5YABNFnXpYUDp0v_kzDYi8pSp
p5 0 < mean 0.43 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAknDCCfYtwFdoUJlgBLEwhYp6RGmTTNfzAFeFZkhUqvw8stzOV
p5 0 < mean 0.5 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAknVU6Lcl-b8V9a1b5SYzImqYqC4UzQgN-2Y83eLvThUeC1M2R
p5 0 < mean 0.63 (SD = 0.48) < p95 1
  • get_agent_activity_volatile_agent: uhCAkoyik__qovKkUNygbAetE_EF7cLGFAqplNRaK6jidZywdzlXN
p5 0 < mean 0.09 (SD = 0.29) < p95 1
  • get_agent_activity_volatile_agent: uhCAkpZjFLEqSp8-A6SaNwAiLj2lRPE5l1qzCinO3NkgOz2ajRQdv
p5 0 < mean 0.38 (SD = 0.48) < p95 1
  • get_agent_activity_volatile_agent: uhCAkpbBvn0_7DVq79dYJA-GevjUmWRfBVR3IkpNU_LKist7ohZgW
p5 0 mean 0 (SD = 0) p95 0
  • get_agent_activity_volatile_agent: uhCAkpvMQGNjkBmVMb8u53RkMPjw1w7lYRc62s04HtW6do2TePCUF
p5 0 < mean 0.09 (SD = 0.29) < p95 1
  • get_agent_activity_volatile_agent: uhCAkq72f3wLG8_XCn67EknNQn7I7UNnFsiJ56ydq2K00dVKLswc8
p5 0 < mean 0.5 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAkq9cC9_I44Kh3hbs1MC0d7cmjVg8BFrI90cu_W07kPGpXN38l
p5 0 < mean 0.56 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAkqEuYz7TxlbL_VeO-UFlI4Uz3A2iMTqS46oLRrFjrZ2G3Tatk
p5 0 < mean 0.43 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkqcMDxGcPEaYUezI3KT1kisrOd9N3QIH1Hz9tuwMoukSdEjik
p5 0 < mean 0.6 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkrX3l88wMCBCNf7Cad35PpA2RhKMcCeTWls_nnZtqUOp4vsC6
p5 0 < mean 0.43 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAktF4DOgg9ROV_C0Ot3fz8QScbW0A_sWpEriV2n3qK9zUybPmG
p5 0 < mean 0.6 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAktYcd2ZWVSqA-otLJEk7ju5_qLXTMCGSaOVlYdC02ghFtaS6A
p5 0 < mean 0.36 (SD = 0.48) < p95 1
  • get_agent_activity_volatile_agent: uhCAkt_MY7HtQ03gKGX03xD9wg68ktEoAboYpzDnWk43CJff3AYYn
p5 0 < mean 0.2 (SD = 0.4) < p95 1
  • get_agent_activity_volatile_agent: uhCAkuoqN_XtHKhdE53euMjzwEVqAjqzacmRZg92qY7RcgzzBeqP1
p5 0 < mean 0.08 (SD = 0.27) < p95 1
  • get_agent_activity_volatile_agent: uhCAkurRVm6D6T__MCwqsPxj-JbFo5xgTEyEXV28xPOA71kP_XRFP
p5 0 < mean 0.17 (SD = 0.37) < p95 1
  • get_agent_activity_volatile_agent: uhCAkus6PWzkqQJ9lijJaRoSkwHMOJlVOxZ6EfrGzu2ZimpL58jwh
p5 0 < mean 0.5 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAkutxEoMVG6MfnU4RLGh6lE9gx_lXILzbHamVYLQUqOXqrBS9v
p5 0 < mean 0.5 (SD = 0.5) < p95 1
  • get_agent_activity_volatile_agent: uhCAkuvE_NErmkFprs3M9XAkJXKsTv7zQAfId2RCdplWxh4kLeOa6
p5 0 < mean 0.25 (SD = 0.43) < p95 1
  • get_agent_activity_volatile_agent: uhCAkv627rFjETUWy-X4UzACKeZS7k66-ffgYYxlIcQEVg-V7qY_J
p5 0 < mean 0.8 (SD = 0.4) < p95 1
  • get_agent_activity_volatile_agent: uhCAkvSPd0KCId13hRrFMDsmH21glVJEHLswdM5qbbjeuYQbm3709
p5 0 mean 0 (SD = 0) p95 0
  • get_agent_activity_volatile_agent: uhCAkvStp16QyE0EdIfSBz4ThIpRwWcXmoP33XcphmvRDi8j9hswn
p5 0 < mean 0.6 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkvkZbTestkvRGvwQjibcmnchCaqotQVSGZgXMOIkQg-qA1BMP
p5 0 < mean 0.21 (SD = 0.41) < p95 1
  • get_agent_activity_volatile_agent: uhCAkw9S4qsp2m7jEKVQQhBPmk3ISyd-Ous7lwHYo4jOtNXQoNdxU
p5 0 < mean 0.18 (SD = 0.39) < p95 1
  • get_agent_activity_volatile_agent: uhCAkwY-FW0eBvBPjBNfwCf--DK3SWis5OsiXb_So2zWLfCpxq2BC
p5 0 < mean 0.67 (SD = 0.47) < p95 1
  • get_agent_activity_volatile_agent: uhCAkwzJqUEpFYUBJm19cS6p4LvNX6Y33oAHHqUxDbokdBKAFXIxE
p5 0 < mean 0.43 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkxBWByUrsF3sX43dW1jtBDKqIVoBGGwCgnren9AKoGgRRJ_D7
p5 0 < mean 0.43 (SD = 0.49) < p95 1
  • get_agent_activity_volatile_agent: uhCAkxHrJ75d4WDX2gLgknoAqYIOSErGvxhqaem_hoBYp7iMxPrDD
p5 0 < mean 0.07 (SD = 0.26) < p95 1
  • get_agent_activity_volatile_agent: uhCAkxWJakfh2f0h7nJ0iRT8rv58lFDOg5qEPo3srnCiO8T1_I7C0
p5 0 < mean 0.18 (SD = 0.39) < p95 1
  • get_agent_activity_volatile_agent: uhCAkxy1ERm1xxAlTBpX20sb9ul7KyK9d7frN4KCLQEOpabK1BF1A
p5 0 < mean 0.2 (SD = 0.4) < p95 1
  • get_agent_activity_volatile_agent: uhCAky5MfUPxHX58A_0ii9A8A1buz5udmKd-wELYi6mNfzHC_pmAU
p5 0 < mean 0.38 (SD = 0.48) < p95 1
  • get_agent_activity_volatile_agent: uhCAky7rvLwpDsP-DYaJ6ksWJ3Nad0tbA-JHqAxn_Fb9RzNGyX9Eo
p5 0 < mean 0.25 (SD = 0.43) < p95 1
  • get_agent_activity_volatile_agent: uhCAkyOfUyQD9Vv8ISbMfi2kgkDKXXzbTZm9B4UmYs7JFAzBMDQ0p
p5 0 mean 0 (SD = 0) p95 0
  • get_agent_activity_volatile_agent: uhCAkyPB0H3yLDojWTlFIq4bBy9fXtz0XGB5LG1UEgyfykFowdg2n
p5 0 < mean 0.33 (SD = 0.47) < p95 1
  • get_agent_activity_volatile_agent: uhCAkyTeuLsXjcHSuAdEx62RtPGAflmMa-_3Yzk13K3IZHK8mDE21
p5 0 < mean 0.08 (SD = 0.27) < p95 1
  • get_agent_activity_volatile_agent: uhCAkyV13X6Yjiyz13t2xsAWAi1E0lbswu8nLvI6cHXl2RnBjqouU
p5 0 < mean 0.63 (SD = 0.48) < p95 1
  • get_agent_activity_volatile_agent: uhCAkzs1343KRW7bv5M7wnRIrk1ghOdZyc6eT2UzCZGUhbmsPsB14
p5 0 < mean 0.67 (SD = 0.47) < p95 1
Error count
The number of errors accumulated during the scenario.
1280

Write Validated must_get_agent_activity

A scenario where write agents create entries in batches of 10, while must_get_agent_activity agents each pick a random write agent and repeatedly attempt to create an entry that references the chain top of that write agent's latest batch. Because of this reference, the entry's validation function needs to make a must_get_agent_activity call.

The purpose of this scenario is to measure the time it takes for published agent activity data to be gossiped among authorities and become available to peers that query it via must_get_agent_activity.

This test is similar to Mixed-Arc must_get_agent_activity, but all agents are full-arc.
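The dependency the scenario exercises can be sketched as follows. This is a minimal, self-contained illustration, not Holochain's actual API: in a real zome the lookup would be the HDK's must_get_agent_activity host call, while here a local map (`DhtStub`, a hypothetical name) stands in for the DHT so the control flow is runnable.

```rust
use std::collections::HashMap;

type AgentKey = &'static str;

/// Stand-in for DHT-held agent activity: write agent -> ordered action hashes.
struct DhtStub {
    activity: HashMap<AgentKey, Vec<String>>,
}

impl DhtStub {
    /// Mirrors the semantics the scenario relies on: validation of an entry that
    /// references `(agent, seq)` can only succeed once the queried authority can
    /// return the agent's chain up to and including that sequence number.
    fn must_get_agent_activity(&self, agent: AgentKey, seq: usize) -> Result<&[String], String> {
        let chain = self
            .activity
            .get(agent)
            .ok_or_else(|| "agent unknown".to_string())?;
        if chain.len() > seq {
            Ok(&chain[..=seq])
        } else {
            Err(format!("chain top {} not yet available", seq))
        }
    }
}

fn main() {
    let mut activity: HashMap<AgentKey, Vec<String>> = HashMap::new();
    // A write agent has published a batch of 10 actions (seq 0..=9).
    activity.insert("writer", (0..10).map(|i| format!("action-{}", i)).collect());
    let dht = DhtStub { activity };

    // A reader referencing the batch's chain top (seq 9) validates successfully...
    assert!(dht.must_get_agent_activity("writer", 9).is_ok());
    // ...while referencing a not-yet-propagated seq fails, which is what the
    // scenario's retrieval-error counter records.
    assert!(dht.must_get_agent_activity("writer", 15).is_err());
    println!("ok");
}
```

Until the activity has propagated, the reader's attempt fails and is retried; the time from batch creation to the first success is what the chain batch delay metric below measures.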

Started
Sat, 21 Mar 2026 19:50:16 UTC
Peer count
250
Peer count at end
5977
Behaviours
  • must_get_agent_activity (1 agent)
  • write (1 agent)
Holochain version
0.6.1-rc.3
Wind Tunnel version
0.6.0
Run ID
write_validated_must_get_agent_activity_23385213145
Highest observed action_seq
The maximum chain length observed per write agent, aggregated across all reading agents. The reading dimension is collapsed by taking the maximum observed value for each write agent (any successful read counts as propagation). This reflects the DHT’s ability to propagate agent activity ops and make them available to querying peers.
Mean max (across write agents)
Mean of the highest chain head value observed per write agent. Each write agent contributes its maximum, i.e. the highest value seen by any reader.
489.96
Max
Highest chain head value observed for any single write agent.
1534
Write agents observed
23
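The aggregation described above can be expressed as a mean-of-max reduction. The sketch below is illustrative (the function name and data shape are assumptions, not Wind Tunnel's API): per-(reader, writer) observations are first collapsed to the maximum seq any reader saw for each writer, then averaged across writers.

```rust
use std::collections::HashMap;

/// Collapse (reader, write_agent, highest observed action_seq) observations to
/// one maximum per write agent, then average those maxima across write agents.
fn mean_max_action_seq(observations: &[(&str, &str, u64)]) -> f64 {
    let mut max_per_writer: HashMap<&str, u64> = HashMap::new();
    for &(_reader, writer, seq) in observations {
        let entry = max_per_writer.entry(writer).or_insert(0);
        *entry = (*entry).max(seq);
    }
    let sum: u64 = max_per_writer.values().sum();
    sum as f64 / max_per_writer.len() as f64
}

fn main() {
    let obs = [
        ("r1", "w1", 40), ("r2", "w1", 100), // w1 collapses to 100
        ("r1", "w2", 60), ("r2", "w2", 20),  // w2 collapses to 60
    ];
    // (100 + 60) / 2 = 80
    assert_eq!(mean_max_action_seq(&obs), 80.0);
    println!("{}", mean_max_action_seq(&obs));
}
```

Taking the max over readers encodes the "any successful read counts as propagation" rule: a writer's chain counts as propagated up to the furthest point any single reader reached.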
Chain batch delay timing
The time between a write agent's creation of a batch and a must_get_agent_activity agent's successful discovery of the batch and creation/self-validation of a new entry that depends on it.
Mean
mean 335.301241s (SD = 31.559515s)
Max
Highest per-partition mean latency
max mean 1256.112s
Min
Lowest per-partition mean latency
min mean 12.977667s
Chain batch delay rate
The rate at which a must_get_agent_activity agent was able to discover batches and create/self-validate new entries that depend on them.
Mean
mean 1.14/s
Max
Highest per-partition mean rate
max 8.6/s
Min
Lowest per-partition mean rate
min 0/s
create_validated_sample_entry zome call timing
The time taken to complete the zome call that creates an entry depending on a write agent's source chain.
Mean
mean 0.374417s (SD = 0.099348s)
Max
Highest per-partition mean latency
max mean 5.101876s
Min
Lowest per-partition mean latency
min mean 0.031258s
Retrieval errors
A running accumulation of the errors encountered by an agent when attempting to self-validate actions that depend on must_get_agent_activity calls.
Total
42567
Agents affected
105 / 106
Mean per agent
Total count divided by number of partitions, rounded to nearest whole number
402
Max
Highest count in any single partition
862
Min
Lowest count in any single partition
0
Final error count
The total number of all types of error accumulated over the run by all agents.
342651
Host Metrics
User CPU usage
CPU usage by user space
36.00%
Network receive rate (primary)
Rate of bytes received on primary network interface
p5 69.81 KiB/s < mean 654.69 KiB/s (SD = 567.13 KiB/s) < p95 1.46 MiB/s
Network send rate (primary)
Rate of bytes sent on primary network interface
p5 4.26 KiB/s < mean 92.09 KiB/s (SD = 161.93 KiB/s) < p95 281.50 KiB/s
Total bytes received
Total bytes received on primary network interface
count
140.78 GiB
mean
138.62 MiB/s
Total bytes sent
Total bytes sent on primary network interface
count
19.80 GiB
mean
19.50 MiB/s
CPU spike anomaly
Whether a CPU spike anomaly was detected
Detected

Warning CPU p99 reached 94.8%

Memory leak anomaly
Whether a memory leak anomaly was detected
Detected

Warning Memory growing at 482.60 MB/s

Disk full anomaly
Whether a disk full anomaly was detected
NotDetected
Swap thrashing anomaly
Whether a swap thrashing anomaly was detected
Detected

Critical Heavy swap usage (80.6% swap used)

System overload anomaly
Whether a system overload anomaly was detected
Detected

Warning 4% of hosts overloaded (load5/ncpus > 1.0)

Additional Host Metrics
Hidden by default. Click to toggle visibility.
CPU usage
Total CPU usage and kernel CPU usage
Total
p5 3.59% < mean 53.18% (SD = 28.54%) < p95 90.12%
System
17.18%
CPU percentiles
CPU usage percentiles
p50
60.86%
p95
90.12%
p99
94.83%
CPU usage above 80%
Number of hosts above 80% CPU and mean time spent above threshold for those hosts
count
204 hosts
mean time
0.22s
Memory used percentage
Percentage of memory used
p5 7.37% < mean 14.55% (SD = 8.65%) < p95 29.31%
Memory available percentage
Percentage of available memory
p5 70.69% < mean 85.45% (SD = 8.65%) < p95 92.63%
Max host memory used
Maximum memory usage percentage across all hosts
max 80.60%
Max host swap used percentage
Maximum swap space usage percentage across all hosts
max 80.63%
Memory growth rate
Rate of memory growth over time
growth 482.60 MB/s
Disk read throughput
Disk read throughput in MB/s
0.93 MB/s
Disk write throughput
Disk write throughput in MB/s
439.73 MB/s
Disk space utilization risk
Number of hosts nearing disk space capacity by mount point
Mount Point /
0/209 hosts
Mount Point /boot
0/2 hosts
Mount Point /efi-boot
0/8 hosts
Mount Point /etc/hostname
0/27 hosts
Mount Point /etc/hosts
0/27 hosts
Mount Point /etc/resolv.conf
0/27 hosts
Mount Point /nix/store
0/15 hosts
System load average
System load averages over 1, 5, and 15 minutes. This is an unnormalised value not divided by number of CPUs, so it is only meaningful if all machines have the same core count.
1 min
2.26
5 min
1.57
15 min
1.01
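The normalisation caveat above, and the overload check reported under the system overload anomaly (load5/ncpus > 1.0), amount to the following. A minimal sketch; the core counts are illustrative, and the function name is an assumption rather than anything from the tooling.

```rust
/// A host counts as overloaded when its 5-minute load average exceeds its
/// CPU count, i.e. load5 / ncpus > 1.0. This is why the raw load averages
/// reported above are only comparable across hosts with equal core counts.
fn is_overloaded(load5: f64, ncpus: u32) -> bool {
    load5 / ncpus as f64 > 1.0
}

fn main() {
    // The run's mean 5-minute load of 1.57 would overload a 1-core host
    // but not a 2-core one.
    assert!(is_overloaded(1.57, 1));
    assert!(!is_overloaded(1.57, 2));
    println!("ok");
}
```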
CPU overloaded hosts
Percentage of hosts that experienced CPU overload
4.20%
CPU pressure
CPU pressure over 10, 60, and 300 second averages
10 second average
p5 0% < mean 28.7869% (SD = 24.5157%) < p95 72.67%
60 second average
26.7088%
300 second average
18.0683%
Memory pressure some
Memory pressure (some tasks stalled) over 10, 60, and 300 second averages
10 second average
p5 0% mean 0% (SD = 0%) p95 0%
60 second average
0.0000%
300 second average
0.0000%
Memory pressure full
Memory pressure (all tasks stalled) over 10, 60, and 300 second averages
10 second average
p5 0% mean 0% (SD = 0%) p95 0%
60 second average
0.0000%
300 second average
0.0000%
I/O pressure some
I/O pressure (some tasks stalled) over 10, 60 and 300 second averages
10 second average
p5 0% < mean 6.7215% (SD = 8.7092%) < p95 22.03%
60 second average
6.3707%
300 second average
4.5166%
I/O pressure full
I/O pressure (all tasks stalled) over 10, 60 and 300 second averages
10 second average
p5 0% < mean 4.5184% (SD = 7.5168%) < p95 19.52%
60 second average
4.3036%
300 second average
3.1508%
Holochain process CPU usage
CPU usage by Holochain process
p5 1.71% < mean 50.89% (SD = 27.35%) < p95 86.24%
Holochain process memory (PSS)
Proportional Set Size memory of Holochain process
p5 220.40 KiB < mean 453.52 KiB (SD = 119.59 KiB) < p95 628.70 KiB
Holochain process threads
Number of threads in Holochain process
p5 11 threads < mean 31.57 threads (SD = 12.68 threads) < p95 49 threads
Holochain process file descriptors
Number of file descriptors used by Holochain process
p5 48 file descriptors < mean 85.4 file descriptors (SD = 21.47 file descriptors) < p95 115 file descriptors