@yoursunny
Created February 16, 2026 00:17
Freaky Fast Digital Coma - an art piece for the love of DartNode and to the respect of Daniel https://freaky-fast-digital-coma.yoursunny.dev/
<!DOCTYPE html>
<meta charset="utf-8">
<title>Freaky Fast Digital Coma</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="https://raj457036.github.io/attriCSS/themes/midnight-green.css">
<style>
body section[container] {
max-width: max(80vw, 38rem);
}
.back-btn {
position: fixed;
top: 3rem;
right: 1rem;
padding: 0.5rem 1rem;
background: var(--nav-bg, #002d2d);
color: var(--nav-color, #fff);
text-decoration: none;
border-radius: 4px;
font-weight: bold;
border: 1px solid rgba(255,255,255,0.2);
z-index: 100;
}
.back-btn:hover {
background: #004d4d;
}
</style>
<script async src="https://www.googletagmanager.com/gtag/js?id=G-MLZ5G2C4X2"></script><script>window.dataLayer=[];function gtag(){dataLayer.push(arguments);}if(location.hostname.endsWith(".yoursunny.dev")){gtag("js",new Date());gtag("config","G-MLZ5G2C4X2");}</script>
<nav>
<header><a href="#home">Freaky Fast Digital Coma</a></header>
</nav>
<a href="#" class="back-btn">× Close</a>
<section id="home" container>
<h1>Freaky Fast Digital Coma</h1>
<p style="text-align: right;">&mdash; an art piece for the love of DartNode and to the respect of Daniel</p>
<ul>
<li><a href="https://serververify.com/benchmarks/a644cdaa-bcfe-43eb-95ea-63655245c286">ServerVerify benchmark</a>: <b style="color: #d77927;">D</b></li>
<li><a href="#yabs">YABS</a>: ✔ Online (/32) / ✔ Online (/128)</li>
<li><a href="#nws">nws.sh</a>: near-line-rate 10Gbps pure awesomeness.</li>
<li><a href="#lc">latency-check</a>: 9-second maximum clat is a digital coma.</li>
<li><a href="#mw">mixed-workload</a>: the 6 Gbps network is 250x faster than the 3 MB/s disk.</li>
<li><a href="#mt">max-throughput</a>: big reads are 1000x faster than small ones; it's a Ferrari with square wheels.</li>
<li><a href="#sw">sustained-write</a>: 5 IOPS; 2001 called, they want their USB 1.1 flash drive back.</li>
</ul>
<p><small>
Big thanks to <i>mentally strong</i> Daniel for the 14-hour shifts.
This page is a tribute to the hustle and grind, not a complaint about the hardware.
</small></p>
</section>
<section id="yabs" container>
<h2>Yet-Another-Bench-Script</h2>
<p>
This baseline report establishes the "High-Speed Paradox" of the Houston node.
While the <b>Intel Xeon E5-2697A v4</b> provides stable, albeit legacy, compute cycles, the storage subsystem exhibits a <b>catastrophic drop-off</b> in performance as block sizes increase.
The initial 4k random I/O results (~3.78 MB/s) hint at severe underlying contention, while the <b>iperf3</b> results reveal a network stack that is significantly over-provisioned relative to the storage tier, foreshadowing the 6-9 Gbps "Freaky Fast" reality confirmed in the later network-specific tests.
</p>
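The drop-off can be reasoned about with the identity IOPS ≈ bandwidth / block size: when a volume hits a fixed throughput ceiling, large blocks leave almost no IOPS headroom. A minimal sketch (the `iops` helper and the 10 MB/s ceiling are illustrative assumptions, not figures lifted from the report):

```python
# Hypothetical sanity check (not part of YABS): relate bandwidth to IOPS.
# IOPS ~= bandwidth / block_size, so a flat MB/s ceiling implies collapsing IOPS.
def iops(bw_bytes_per_s: float, block_size: int) -> float:
    """Approximate IOPS achievable at a given bandwidth and block size."""
    return bw_bytes_per_s / block_size

KiB, MiB = 1024, 1024 * 1024

# At an assumed ~10 MB/s ceiling:
print(round(iops(10 * MiB, 1 * MiB)))  # 1m blocks -> 10 IOPS
print(round(iops(10 * MiB, 4 * KiB)))  # 4k blocks -> 2560 IOPS
```

The same ceiling that sustains thousands of 4k operations supports only single-digit IOPS at 1m blocks, which is exactly the shape of the fio table below.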
<pre><code>
# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
# Yet-Another-Bench-Script #
# v2025-04-20 #
# https://github.com/masonr/yet-another-bench-script #
# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
Sun Feb 15 14:45:06 UTC 2026
Basic System Information:
---------------------------------
Uptime : 0 days, 10 hours, 12 minutes
Processor : Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
CPU cores : 2 @ 2599.996 MHz
AES-NI : ✔ Enabled
VM-x/AMD-V : ✔ Enabled
RAM : 3.8 GiB
Swap : 0.0 KiB
Disk : 99.9 GiB
Distro : Debian GNU/Linux 13 (trixie)
Kernel : 6.12.69+deb13-amd64
VM Type : KVM
IPv4/IPv6 : ✔ Online / ✔ Online
IPv6 Network Information:
---------------------------------
ISP : Snaju Development
ASN : AS399646 Snaju Development
Location : Houston, Texas (TX)
Country : United States
fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/sda5):
---------------------------------
Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 1.88 MB/s     (471) | 23.34 MB/s     (364)
Write      | 1.90 MB/s     (476) | 23.83 MB/s     (372)
Total      | 3.78 MB/s     (947) | 47.17 MB/s     (736)
           |                     |
Block Size | 512k          (IOPS) | 1m            (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 14.08 MB/s     (27) | 4.89 MB/s       (4)
Write      | 15.29 MB/s     (29) | 5.11 MB/s       (4)
Total      | 29.38 MB/s     (56) | 10.00 MB/s      (8)
iperf3 Network Speed Tests (IPv4):
---------------------------------
Provider        | Location (Link)           | Send Speed      | Recv Speed      | Ping
-----           | -----                     | ----            | ----            | ----
Clouvider       | London, UK (10G)          | 1.73 Gbits/sec  | 2.06 Gbits/sec  | 107 ms
Eranium         | Amsterdam, NL (100G)      | 1.70 Gbits/sec  | 1.84 Gbits/sec  | 115 ms
Uztelecom       | Tashkent, UZ (10G)        | 880 Mbits/sec   | 1.00 Gbits/sec  | 209 ms
Leaseweb        | Singapore, SG (10G)       | 2.65 Mbits/sec  | 731 Mbits/sec   | 220 ms
Clouvider       | Los Angeles, CA, US (10G) | 3.14 Gbits/sec  | 5.64 Gbits/sec  | 38.1 ms
Leaseweb        | NYC, NY, US (10G)         | 3.44 Gbits/sec  | 6.05 Gbits/sec  | 39.1 ms
Edgoo           | Sao Paulo, BR (1G)        | 1.16 Gbits/sec  | 1.65 Gbits/sec  | 144 ms
iperf3 Network Speed Tests (IPv6):
---------------------------------
Provider        | Location (Link)           | Send Speed      | Recv Speed      | Ping
-----           | -----                     | ----            | ----            | ----
Clouvider       | London, UK (10G)          | 1.71 Gbits/sec  | 2.00 Gbits/sec  | 107 ms
Eranium         | Amsterdam, NL (100G)      | 1.38 Gbits/sec  | 2.02 Gbits/sec  | 114 ms
Uztelecom       | Tashkent, UZ (10G)        | 988 Mbits/sec   | 833 Mbits/sec   | 209 ms
Leaseweb        | Singapore, SG (10G)       | 903 Mbits/sec   | 965 Mbits/sec   | 221 ms
Clouvider       | Los Angeles, CA, US (10G) | 3.02 Gbits/sec  | 4.79 Gbits/sec  | 38.1 ms
Leaseweb        | NYC, NY, US (10G)         | 3.82 Gbits/sec  | 5.63 Gbits/sec  | 39.1 ms
Edgoo           | Sao Paulo, BR (1G)        | 1.57 Gbits/sec  | 1.65 Gbits/sec  | 144 ms
Geekbench 6 Benchmark Test:
---------------------------------
Test            | Value
                |
Single Core     | 797
Multi Core      | 1466
Full Test       | https://browser.geekbench.com/v6/cpu/16595267
YABS completed in 22 min 51 sec
</code></pre>
</section>
<section id="nws" container>
<h2>nws.sh SpeedTest</h2>
<p>
This report illustrates a massive disparity between network throughput and storage I/O.
The VPS achieves <b>near-line-rate 10Gbps</b> performance (9.2 Gbps peak) across the North American backbone.
However, with a sustained disk write speed of ~6 MB/s, the server can receive data roughly <b>125 times</b> faster (at a steady 6 Gbps) than it can persist it to non-volatile storage, creating a "Network Firehose vs. Soda Straw" bottleneck.
</p>
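The firehose-to-straw ratio is just a unit conversion. A quick sketch, assuming a conservative sustained 6 Gbps on the network side and the ~6 MB/s disk write figure:

```python
# Illustrative back-of-the-envelope: how much faster the NIC can receive
# data than the disk can persist it.
def gbps_to_mb_per_s(gbps: float) -> float:
    """Convert gigabits per second to megabytes per second (SI units)."""
    return gbps * 1000 / 8

network_mb_s = gbps_to_mb_per_s(6.0)    # assumed sustained network rate
disk_mb_s = 6.0                         # observed sustained write speed
print(round(network_mb_s / disk_mb_s))  # -> 125
```

At the 9.2 Gbps peak the gap is wider still, nearly 200x.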
<pre><code>
---------------------------------- nws.sh ---------------------------------
A simple script to bench network performance using speedtest-cli
---------------------------------------------------------------------------
Version : v2025.11.07
Global Speedtest : wget -qO- nws.sh | bash
Region Speedtest : wget -qO- nws.sh | bash -s -- -r <region>
Ping & Routing : wget -qO- nws.sh | bash -s -- -rt <region>
---------------------------------------------------------------------------
Basic System Info
---------------------------------------------------------------------------
CPU Model : Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
CPU Cores : 2 @ 2599.996 MHz
CPU Cache : 16384 KB
AES-NI : ✔ Enabled
VM-x/AMD-V : ✔ Enabled
Total Disk : 99.9 GB (18.0 GB Used)
Total RAM : 3.8 GB (383.0 MB Used)
System uptime : 0 days, 15 hour 57 min
Load average : 0.00, 0.01, 0.00
OS : Debian GNU/Linux 13
Arch : x86_64 (64 Bit)
Kernel : 6.12.69+deb13-amd64
Virtualization : KVM
TCP Control :
---------------------------------------------------------------------------
Basic Network Info
---------------------------------------------------------------------------
Primary Network : IPv6
IPv6 Access : ✔ Online
IPv4 Access : ✔ Online
ISP : Snaju Development
ASN : AS399646 Snaju Development
Location : Houston, Texas-TX, United States
---------------------------------------------------------------------------
Speedtest.net (Region: GLOBAL)
---------------------------------------------------------------------------
Location Latency Loss DL Speed UP Speed Server
ISP: Snaju Development
Nearest 0.82 ms 0.0% 7681.19 Mbps 8798.55 Mbps EZEE Fiber - Pearland, TX
Bangalore, IN 260.12 ms 0.0% 3003.49 Mbps 360.18 Mbps Bharti Airtel Ltd - Bangalore
Chennai, IN 243.22 ms 0.0% 2433.25 Mbps 365.96 Mbps RailTel Corporation of India Ltd - Chennai
Mumbai, IN 262.68 ms 0.0% 3190.28 Mbps 410.78 Mbps Melbicom - Mumbai
Seattle, US 49.53 ms N/A 6360.45 Mbps 1765.35 Mbps Comcast - Seattle, WA
Los Angeles, US 32.61 ms 0.0% 5986.65 Mbps 1289.79 Mbps ReliableSite Hosting - Los Angeles, CA
Dallas, US 6.49 ms 0.0% 7422.04 Mbps 5277.40 Mbps Hivelocity - Dallas, TX
Miami, US 32.92 ms 0.0% 9237.08 Mbps 2043.17 Mbps Frontier - Miami, FL
New York, US 42.50 ms 0.0% 7538.65 Mbps 2072.02 Mbps GSL Networks - New York, NY
Toronto, CA 52.00 ms 0.0% 5786.89 Mbps 1518.38 Mbps Rogers - Toronto, ON
Mexico City, MX 75.18 ms 0.0% 5009.07 Mbps 354.46 Mbps INFINITUM - Ciudad de México
London, UK 107.52 ms 0.0% 6141.90 Mbps 712.69 Mbps VeloxServ Communications - London
Amsterdam, NL FAILED
Paris, FR 110.45 ms 0.0% 6951.10 Mbps 910.06 Mbps Scaleway - Paris
Frankfurt, DE 120.76 ms 0.0% 2856.11 Mbps 116.35 Mbps Clouvider Ltd - Frankfurt am Main
Warsaw, PL 134.67 ms 0.0% 5148.34 Mbps 769.64 Mbps Play - Warszawa
Bucharest, RO 147.72 ms 0.0% 6146.77 Mbps 727.13 Mbps Digi
Moscow, RU 147.07 ms 0.0% 3973.89 Mbps 453.36 Mbps Misaka Network, Inc. - Moscow
Jeddah, SA 172.88 ms 0.0% 5579.06 Mbps 631.37 Mbps Saudi Telecom Company
Dubai, AE 214.77 ms N/A 3999.75 Mbps 232.68 Mbps e& UAE - Dubai
Istanbul, TR 162.47 ms 0.0% 4086.55 Mbps 664.91 Mbps Turkcell - Istanbul
Tehran, IR 196.87 ms 0.0% 3612.59 Mbps 444.88 Mbps Irancell - Tehran
Cairo, EG 161.62 ms 0.0% 3611.94 Mbps 605.96 Mbps Telecom Egypt - Cairo
Tokyo, JP FAILED
Shanghai, CU-CN 212.54 ms N/A 3762.62 Mbps 0.49 Mbps China Unicom 5G - Shanghai
Hong Kong, CN 192.19 ms 0.0% 5048.05 Mbps 501.04 Mbps Misaka Network, Inc. - Hong Kong
Singapore, SG 231.04 ms 0.0% 3553.29 Mbps 403.69 Mbps ViewQwest - Singapore
Jakarta, ID 238.00 ms 0.0% 3618.42 Mbps 404.73 Mbps PT Solnet Indonesia - Jakarta
Sydney, AU 170.46 ms 0.0% 5844.41 Mbps 594.01 Mbps Aussie Broadband - Sydney
---------------------------------------------------------------------------
Avg DL Speed : 5095.70 Mbps
Avg UL Speed : 1201.08 Mbps
Total DL Data : 187.60 GB
Total UL Data : 37.33 GB
Total Data : 224.93 GB
---------------------------------------------------------------------------
Duration : 13 min 32 sec
System Time : 15/02/2026 - 20:43:15 UTC
Total Script Runs : 519073
---------------------------------------------------------------------------
Result : https://result.nws.sh/r/1771188195_NH4XES_GLOBAL.txt
---------------------------------------------------------------------------
</code></pre>
</section>
<section id="lc" container>
<h2>fio: latency-check</h2>
<p>
This test measures <b>uncached random access latency</b> using 4KB blocks at a queue depth of 1.
By bypassing the OS page cache (<kbd>--direct=1</kbd>), it reveals the raw seek time of the storage array.
The astronomical 9,029 ms maximum latency indicates a severe I/O bottleneck, likely caused by hardware contention or a saturated storage controller on the Houston node.
</p>
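At queue depth 1 only one I/O is ever in flight, so throughput is simply the reciprocal of average completion latency. A sketch using the average clat from the fio output below (~1141.85 µs):

```python
# With iodepth=1, IOPS ~= 1 / average latency, because each I/O must
# complete before the next is issued.
avg_clat_s = 1141.85e-6          # average clat from the report, in seconds
predicted_iops = 1 / avg_clat_s
print(round(predicted_iops))     # -> 876, close to the reported 861 IOPS
```

The small gap between 876 and 861 is submission overhead (slat), which the reciprocal ignores.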
<pre><code>
sunny@vps9:~$ fio --name=latency-check --rw=randread --size=128m --direct=1 --ioengine=libaio --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --group_reporting
latency-check: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.39
Starting 1 process
latency-check: Laying out IO file (1 file / 128MiB)
Jobs: 1 (f=1): [r(1)][97.4%][eta 00m:01s]
latency-check: (groupid=0, jobs=1): err= 0: pid=6157: Sun Feb 15 17:45:45 2026
read: IOPS=861, BW=3445KiB/s (3528kB/s)(128MiB/38043msec)
slat (usec): min=8, max=3900, avg=16.63, stdev=29.47
clat (usec): min=2, max=9029.6k, avg=1141.85, stdev=71048.37
lat (usec): min=144, max=9029.6k, avg=1158.48, stdev=71048.48
clat percentiles (usec):
| 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147],
| 20.00th=[ 153], 30.00th=[ 159], 40.00th=[ 167],
| 50.00th=[ 176], 60.00th=[ 186], 70.00th=[ 198],
| 80.00th=[ 215], 90.00th=[ 245], 95.00th=[ 306],
| 99.00th=[ 1139], 99.50th=[ 1516], 99.90th=[ 4228],
| 99.95th=[ 22676], 99.99th=[2701132]
bw ( KiB/s): min= 0, max=21248, per=100.00%, avg=9736.96, stdev=7204.47, samples=26
iops : min= 0, max= 5312, avg=2434.23, stdev=1801.13, samples=26
lat (usec) : 4=0.01%, 50=0.01%, 100=0.02%, 250=90.62%, 500=7.21%
lat (usec) : 750=0.60%, 1000=0.35%
lat (msec) : 2=0.92%, 4=0.15%, 10=0.04%, 20=0.01%, 50=0.01%
lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2000=0.01%, >=2000=0.02%
cpu : usr=0.65%, sys=2.23%, ctx=32768, majf=0, minf=10
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=32768,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=3445KiB/s (3528kB/s), 3445KiB/s-3445KiB/s (3528kB/s-3528kB/s), io=128MiB (134MB), run=38043-38043msec
Disk stats (read/write):
sda: ios=32753/69, sectors=262024/92595, merge=0/9, ticks=37475/237007, in_queue=276152, util=96.27%
</code></pre>
</section>
<section id="mw" container>
<h2>fio: mixed-workload</h2>
<p>
A <b>75/25 mixed R/W distribution</b> simulates a real-world application environment (e.g., a database or file-syncing service like Seafile).
The results demonstrate extreme <b>I/O starvation</b>; concurrent write operations cause read latencies to spiral into the thousands of milliseconds, effectively locking the system during standard metadata updates.
</p>
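The `--rwmixread=75` split can be checked against the issued I/O counts in the report below (reads=34750, writes=11876):

```python
# Sanity check on the requested 75/25 read/write mix using the
# issued-operation counts from the fio output.
reads, writes = 34750, 11876
read_pct = 100 * reads / (reads + writes)
print(round(read_pct, 1))  # -> 74.5
```

fio honored the mix almost exactly; the starvation shows up in latency, not in the ratio.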
<pre><code>
sunny@vps9:~$ fio --name=mixed-workload --rw=randrw --rwmixread=75 --bs=4k --size=512m --direct=1 --ioengine=libaio --iodepth=16 --numjobs=4 --group_reporting --runtime=60
mixed-workload: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=16
fio-3.39
Starting 4 processes
mixed-workload: Laying out IO file (1 file / 512MiB)
mixed-workload: Laying out IO file (1 file / 512MiB)
mixed-workload: Laying out IO file (1 file / 512MiB)
mixed-workload: Laying out IO file (1 file / 512MiB)
Jobs: 4 (f=0): [f(4)][100.0%][r=95KiB/s,w=159KiB/s][r=23,w=39 IOPS][eta 00m:00s]
mixed-workload: (groupid=0, jobs=4): err= 0: pid=6165: Sun Feb 15 17:50:03 2026
read: IOPS=561, BW=2246KiB/s (2300kB/s)(136MiB/61890msec)
slat (usec): min=2, max=15725, avg=12.85, stdev=150.94
clat (usec): min=57, max=5557.2k, avg=58389.09, stdev=316793.90
lat (usec): min=170, max=5557.2k, avg=58401.94, stdev=316793.92
clat percentiles (usec):
| 1.00th=[ 367], 5.00th=[ 594], 10.00th=[ 734],
| 20.00th=[ 955], 30.00th=[ 1156], 40.00th=[ 1401],
| 50.00th=[ 1713], 60.00th=[ 2212], 70.00th=[ 3064],
| 80.00th=[ 5014], 90.00th=[ 29754], 95.00th=[ 350225],
| 99.00th=[1300235], 99.50th=[2499806], 99.90th=[3909092],
| 99.95th=[5200937], 99.99th=[5536482]
bw ( KiB/s): min= 32, max=84976, per=100.00%, avg=4636.06, stdev=2754.40, samples=240
iops : min= 8, max=21244, avg=1158.69, stdev=688.59, samples=240
write: IOPS=191, BW=768KiB/s (786kB/s)(46.4MiB/61890msec); 0 zone resets
slat (usec): min=2, max=10616, avg=13.97, stdev=115.37
clat (usec): min=113, max=9718.0k, avg=162600.56, stdev=800212.66
lat (usec): min=149, max=9718.0k, avg=162614.53, stdev=800213.13
clat percentiles (usec):
| 1.00th=[ 314], 5.00th=[ 494], 10.00th=[ 652],
| 20.00th=[ 898], 30.00th=[ 1172], 40.00th=[ 1500],
| 50.00th=[ 1975], 60.00th=[ 2704], 70.00th=[ 4178],
| 80.00th=[ 10159], 90.00th=[ 179307], 95.00th=[ 471860],
| 99.00th=[3909092], 99.50th=[7147095], 99.90th=[9328133],
| 99.95th=[9328133], 99.99th=[9596568]
bw ( KiB/s): min= 32, max=29016, per=100.00%, avg=1597.76, stdev=942.96, samples=238
iops : min= 8, max= 7254, avg=399.32, stdev=235.73, samples=238
lat (usec) : 100=0.01%, 250=0.21%, 500=3.21%, 750=8.02%, 1000=11.39%
lat (msec) : 2=32.12%, 4=19.62%, 10=10.09%, 20=2.58%, 50=3.90%
lat (msec) : 100=0.93%, 250=1.15%, 500=3.91%, 750=0.32%, 1000=0.69%
lat (msec) : 2000=0.55%, >=2000=1.30%
cpu : usr=0.12%, sys=0.25%, ctx=11914, majf=0, minf=46
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=34750,11876,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=16
Run status group 0 (all jobs):
READ: bw=2246KiB/s (2300kB/s), 2246KiB/s-2246KiB/s (2300kB/s-2300kB/s), io=136MiB (142MB), run=61890-61890msec
WRITE: bw=768KiB/s (786kB/s), 768KiB/s-768KiB/s (786kB/s-786kB/s), io=46.4MiB (48.6MB), run=61890-61890msec
Disk stats (read/write):
sda: ios=34726/11858, sectors=277808/94783, merge=0/1, ticks=1895147/1717092, in_queue=3635680, util=89.42%
</code></pre>
</section>
<section id="mt" container>
<h2>fio: max-throughput</h2>
<p>
By utilizing a large <b>1MB block size</b> and a high <b>queue depth of 32</b>, this test measures the maximum sequential throughput of the host’s read-ahead cache.
The 3.5 GB/s result suggests an <b>enterprise NVMe tier</b> or a large RAM-backed cache layer, providing a high-speed "mirage" that masks the slow physical storage underneath.
</p>
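The run time itself is the tell: reading the whole 1024 MiB test file at the reported 3357 MiB/s should take about a third of a second, which matches the 305 ms run and is far too fast for the spinning-rust behavior seen elsewhere.

```python
# Cross-check: expected run time for the 1 GiB sequential read at the
# reported bandwidth (figures taken from the fio output below).
size_mib, bw_mib_s = 1024, 3357
print(round(size_mib / bw_mib_s * 1000))  # -> 305 (ms)
```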
<pre><code>
sunny@vps9:~$ fio --name=max-throughput --rw=read --bs=1m --size=1g --direct=1 --ioengine=libaio --iodepth=32 --numjobs=1 --group_reporting
max-throughput: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.39
Starting 1 process
max-throughput: Laying out IO file (1 file / 1024MiB)
max-throughput: (groupid=0, jobs=1): err= 0: pid=6171: Sun Feb 15 17:50:56 2026
read: IOPS=3357, BW=3357MiB/s (3520MB/s)(1024MiB/305msec)
slat (usec): min=5, max=2398, avg=27.87, stdev=117.89
clat (usec): min=1912, max=123656, avg=9276.68, stdev=4764.27
lat (usec): min=1948, max=123668, avg=9304.55, stdev=4760.76
clat percentiles (msec):
| 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7],
| 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 10],
| 70.00th=[ 11], 80.00th=[ 12], 90.00th=[ 14], 95.00th=[ 16],
| 99.00th=[ 19], 99.50th=[ 21], 99.90th=[ 22], 99.95th=[ 124],
| 99.99th=[ 124]
lat (msec) : 2=0.10%, 4=1.17%, 10=65.23%, 20=33.01%, 50=0.39%
lat (msec) : 250=0.10%
cpu : usr=0.00%, sys=12.50%, ctx=124, majf=0, minf=29
IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=97.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
READ: bw=3357MiB/s (3520MB/s), 3357MiB/s-3357MiB/s (3520MB/s-3520MB/s), io=1024MiB (1074MB), run=305-305msec
Disk stats (read/write):
sda: ios=982/0, sectors=2011136/0, merge=0/0, ticks=8580/0, in_queue=8580, util=66.50%
</code></pre>
</section>
<section id="sw" container>
<h2>fio: sustained-write</h2>
<p>
This test evaluates <b>synchronous write persistence</b> over a 10GB span with an explicit fsync after every operation.
This bypasses all volatile host-side caching to expose the physical limitation of the persistent storage.
The drop to <b>5 IOPS</b> and a 133-second peak completion latency indicates that the physical disks are either severely oversubscribed or underperforming.
</p>
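What ~6 MB/s of synchronous write throughput means in practice: persisting the 10 GiB test file takes about half an hour, consistent with the ~1786-second run time in the output below.

```python
# Translate the reported 6011 kB/s sustained-write bandwidth into
# wall-clock time for the 10 GiB test file.
size_bytes = 10 * 1024**3      # 10 GiB
bw_bytes_s = 6011 * 1000       # reported 6011 kB/s
seconds = size_bytes / bw_bytes_s
print(round(seconds / 60, 1))  # -> 29.8 (minutes)
```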
<pre><code>
sunny@vps9:~$ fio --name=sustained-write --rw=write --bs=1m --size=10g --direct=1 --ioengine=libaio --iodepth=32 --numjobs=1 --fsync=1 --group_reporting
sustained-write: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.39
Starting 1 process
sustained-write: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [W(1)][99.6%][w=35.0MiB/s][w=35 IOPS][eta 00m:08s]
sustained-write: (groupid=0, jobs=1): err= 0: pid=6176: Sun Feb 15 18:26:04 2026
write: IOPS=5, BW=5870KiB/s (6011kB/s)(10.0GiB/1786259msec); 0 zone resets
slat (usec): min=52, max=2898.6k, avg=1022.82, stdev=41988.34
clat (msec): min=2, max=133798, avg=5362.99, stdev=11622.04
lat (msec): min=2, max=133798, avg=5364.01, stdev=11622.60
clat percentiles (msec):
| 1.00th=[ 218], 5.00th=[ 271], 10.00th=[ 309], 20.00th=[ 376],
| 30.00th=[ 464], 40.00th=[ 726], 50.00th=[ 961], 60.00th=[ 1452],
| 70.00th=[ 2467], 80.00th=[ 6611], 90.00th=[17113], 95.00th=[17113],
| 99.00th=[17113], 99.50th=[17113], 99.90th=[17113], 99.95th=[17113],
| 99.99th=[17113]
bw ( KiB/s): min= 2035, max=145408, per=100.00%, avg=15718.17, stdev=24243.77, samples=1330
iops : min= 1, max= 142, avg=15.33, stdev=23.67, samples=1330
lat (msec) : 4=0.01%, 20=0.01%, 50=0.02%, 100=0.04%, 250=3.30%
lat (msec) : 500=29.33%, 750=7.83%, 1000=11.12%, 2000=15.30%, >=2000=33.04%
fsync/fdatasync/sync_file_range:
sync (msec): min=164, max=48244k, avg=151597.91, stdev=2650367.50
sync percentiles (msec):
| 1.00th=[ 228], 5.00th=[ 284], 10.00th=[ 321], 20.00th=[ 393],
| 30.00th=[ 489], 40.00th=[ 785], 50.00th=[ 1003], 60.00th=[ 1519],
| 70.00th=[ 2702], 80.00th=[ 7013], 90.00th=[17113], 95.00th=[17113],
| 99.00th=[17113], 99.50th=[17113], 99.90th=[17113], 99.95th=[17113],
| 99.99th=[17113]
cpu : usr=0.08%, sys=0.09%, ctx=25444, majf=0, minf=11
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=199.4%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=0,10240,0,10239 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
WRITE: bw=5870KiB/s (6011kB/s), 5870KiB/s-5870KiB/s (6011kB/s-6011kB/s), io=10.0GiB (10.7GB), run=1786259-1786259msec
Disk stats (read/write):
sda: ios=289/21348, sectors=6321/21011863, merge=2/279, ticks=136/2962585, in_queue=3884173, util=93.52%
</code></pre>
</section>
<footer>
<p>&copy; 2026 yoursunny.com</p>
</footer>
<script>
const $sections = document.querySelectorAll("section");
const $backBtn = document.querySelector(".back-btn");
function handleHashChange() {
const h = location.hash.slice(1) || "home";
for (const $section of $sections) {
$section.style.display = $section.id === h ? "" : "none";
}
$backBtn.style.display = h === "home" ? "none" : "";
setTimeout(() => window.scrollTo(0, 0), 100);
}
window.addEventListener("hashchange", handleHashChange);
handleHashChange();
</script>