There are some anomalies in the previous graphs, notably the incongruent results for stripe cache sizes of 768 and 32768. So I ran some more tests with a smoother range of I/O block sizes.
NB : Each variable was tested three times, with all caches synced and flushed before and after. The average of those three runs was used to plot each data point on the graphs.
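The sync-and-flush step can be reproduced with the kernel's drop_caches interface. This is a sketch of my assumption of how it was done (the post doesn't show the exact commands); writing to drop_caches requires root:

```shell
flush_caches() {
    # Flush dirty pages out to the array first
    sync
    # Then drop the page cache, dentries and inodes (value 3 = pagecache + slab).
    # Only attempted when writable, i.e. when running as root.
    if [ -w /proc/sys/vm/drop_caches ]; then
        echo 3 > /proc/sys/vm/drop_caches
    fi
}

flush_caches
```

Running this between the write and read passes keeps one test's cached data from inflating the next test's figures.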
Caveat : I also reduced the test file size to 650MB (I know, I know, very bad practice to change multiple variables whilst testing).
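A minimal sketch of such a benchmark loop using dd (the post doesn't name its tool, so dd, the file path and the scaled-down file size here are all my assumptions):

```shell
#!/bin/sh
# Write then read back a test file at several block sizes, printing
# dd's throughput summary for each pass.
TESTFILE=/tmp/bench.dat
SIZE_MB=16    # scaled down from the post's 650MB for illustration

for BS in 512 4096 1048576; do
    echo "blocksize: $BS"
    # Write pass; conv=fsync forces data to disk before dd reports its rate
    dd if=/dev/zero of="$TESTFILE" bs="$BS" \
       count=$((SIZE_MB * 1048576 / BS)) conv=fsync 2>&1 | tail -n1
    sync
    # Read pass
    dd if="$TESTFILE" of=/dev/null bs="$BS" 2>&1 | tail -n1
done

rm -f "$TESTFILE"
```

In the real runs the caches would also be dropped between the write and read passes, so the read pass hits the array rather than the page cache.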
Results from a basic synthetic benchmark of I/O performance whilst varying the stripe_cache_size tunable exposed under the md device's sysfs tree.
Tests were performed on a QNAP SS-439 NAS :
- Intel Atom N270 (1.6GHz), 82801G (ICH7) chipset
- 2GB RAM
- Linux 18.104.22.168 kernel
- CFQ I/O elevator
- RAID5 (128KB chunk size) – 4× WD10TPVT 1TB drives (4KB physical sectors, a.k.a. Advanced Format)
- EXT3 filesystem (noatime)
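For reference, the tunable lives under the array's md directory in sysfs. A quick sketch of reading and changing it (the md0 device name is an assumption; substitute your own array, and note that writing requires root):

```shell
MD_DEV=md0    # assumption: adjust to your array's device name
SYSFS=/sys/block/$MD_DEV/md/stripe_cache_size

if [ -f "$SYSFS" ]; then
    # Current value, measured in pages (4KB each on this platform)
    cat "$SYSFS"
    # Raise the cache; memory used is roughly pages * 4KB * number of drives
    echo 4096 > "$SYSFS"
else
    echo "no such array: $MD_DEV" >&2
fi
```

The setting does not persist across reboots, so it would normally be reapplied from an init script.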
Whilst reading and writing in blocks of just 512 bytes, there seems to be no discernible benefit in setting a larger stripe cache size, with read performance dropping marginally as the cache size is increased.
The first interesting results appeared when the blocksize was increased to 4096 bytes. Read performance drops off sharply as we increase the cache, though write performance gains a small amount.
At a blocksize of 1MB, our previous findings are reinforced. Read performance decreases significantly once past very low cache sizes, though write performance benefits slightly from larger values of stripe_cache_size.