Archive for June, 2010

I/O performance and stripe_cache_size, contd.

June 8, 2010

There are some anomalies in the previous graphs, notably the incongruent results for stripe cache sizes of 768 and 32768. So I ran some more tests across a smoother range of I/O block sizes.

NB: Each variable was tested three times, with all caches synced and flushed before and after each run. The average of those three tests was used to plot each data point on the graphs.

Caveat: I also reduced the test file size to 650MB (I know, I know, very bad practice to change multiple variables whilst testing).
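For the curious, the sync-and-flush between runs was along these lines (a sketch rather than the exact script; drop_caches needs a 2.6.16+ kernel):

# sync
# echo 3 > /proc/sys/vm/drop_caches

sync pushes dirty pages out to disk, and writing 3 to drop_caches evicts the page cache plus dentries and inodes, so no run can serve reads out of RAM left warm by the previous one.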

I/O performance and stripe_cache_size

June 6, 2010

Results from a basic synthetic benchmark of I/O performance whilst varying the stripe_cache_size tunable under the sysfs tree at /sys/block/md0/md/stripe_cache_size.
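The value is measured in pages (4KB) per member device and can be read and changed on the fly. For example (256 is the kernel default; 4096 is just an illustrative target):

# cat /sys/block/md0/md/stripe_cache_size
256
# echo 4096 > /sys/block/md0/md/stripe_cache_size

Bear the memory cost in mind: stripe_cache_size × 4KB × number of drives, so 4096 on this 4-drive array pins around 64MB of RAM.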

Tests were performed on a QNAP SS-439 NAS:

  • Intel Atom N270 (1.6GHz) with Intel 82801G (ICH7) I/O controller hub
  • 2GB RAM
  • Linux 2.6.30.6 kernel
  • CFQ I/O elevator
  • RAID5 (128KB chunk size, verified below) – 4 × WD10TPVT 1TB drives (4KB physical sectors, a.k.a. Advanced Format)
  • EXT3 filesystem (noatime)
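The array layout and chunk size can be double-checked on the box itself; either of these will show it:

# cat /proc/mdstat
# mdadm --detail /dev/md0 | grep -i chunk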

Whilst reading and writing in blocks of just 512 bytes, there seems to be no discernible benefit in setting a larger stripe cache size, with read performance dropping marginally as the cache size is increased.

The first interesting results appeared when the block size was increased to 4096 bytes. Read performance drops off sharply as we increase the cache, though write performance gains a small amount.

At a block size of 1MB, our previous findings are reinforced: read performance decreases significantly once past very low cache sizes, though write performance benefits a small amount from larger values of stripe_cache_size.
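For anyone wanting to reproduce this, the whole sweep boils down to a loop along these lines (a sketch, not the verbatim script: the test path and file size are illustrative, bs was varied across 512, 4k and 1M for the different graphs, and each combination was really run three times and averaged):

for cache in 256 768 4096 8192 32768; do
    echo $cache > /sys/block/md0/md/stripe_cache_size
    sync; echo 3 > /proc/sys/vm/drop_caches
    dd if=/dev/zero of=/share/test.dat bs=1M count=1000    # write pass
    sync; echo 3 > /proc/sys/vm/drop_caches
    dd if=/share/test.dat of=/dev/null bs=1M               # read pass
done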

WD Passport Essential SE 1TB – Advanced format?

June 2, 2010

Having been using this WD Passport Essential SE 1TB USB removable drive (formatted as ext3) to back up my NAS for the last few months, I’ve been consistently underwhelmed by its performance. So I tried aligning the partition to a 4KB boundary, on the off chance that it behaved like the WD10TPVT drives in use within the NAS itself. I’m now seeing write performance increases of such magnitude that I’m left thinking the drive within (a WD10TMVV model) has 4KB physical sectors as well: Western Digital’s so-called ‘Advanced Format’.
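For reference, the realignment just means recreating the partition so that it starts on a sector number divisible by 8 (8 × 512 bytes = 4KB). Something like this sfdisk invocation does it, assuming the same /dev/sds device as below (destructive: everything on the drive is lost, so back up first):

# echo '8,,83,*' | sfdisk -uS /dev/sds
# mkfs.ext3 /dev/sds1

The -uS flag makes sfdisk work in units of sectors; '8,,83,*' starts the partition at sector 8, lets it fill the remaining space, sets the type to 83 (Linux) and marks it bootable, matching the aligned partition table shown further down.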

A very simple benchmark of I/O write performance with non-4KB-aligned sectors:

# fdisk -lu /dev/sds

Disk /dev/sds: 999.5 GB, 999501594624 bytes
255 heads, 63 sectors/track, 121515 cylinders, total 1952151552 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sds1   *          63  1952138474   976069206   83  Linux
# time dd if=/dev/zero of=zero count=1000 bs=1M
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 65.5702 s, 16.0 MB/s

real    1m5.869s
user    0m0.015s
sys     0m8.167s

And again with 4KB-aligned sectors:

# fdisk -lu /dev/sds

Disk /dev/sds: 999.5 GB, 999501594624 bytes
255 heads, 63 sectors/track, 121515 cylinders, total 1952151552 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sds1   *           8  1952151551   976075772   83  Linux
# time dd if=/dev/zero of=./zero count=1000 bs=1M
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 36.2553 s, 28.9 MB/s

real    0m36.577s
user    0m0.009s
sys     0m7.906s

A near-80% improvement in write performance!
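Sadly the drive itself is little help in confirming the 4KB theory. Kernels from around 2.6.31 onwards (newer than the 2.6.30 on the NAS) expose the reported physical sector size through sysfs, and hdparm can ask the drive directly, provided the USB bridge passes ATA commands through (though early Advanced Format drives are known to report 512-byte physical sectors regardless):

# cat /sys/block/sds/queue/physical_block_size
# hdparm -I /dev/sds | grep -i 'sector size'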