Obsolete:Thumper benchmarks

This page contains historical information. It may be outdated or unreliable.

I ran some testing on the new thumpers using filebench. I tested Solaris 10 Update 5, with all current patches installed, using ZFS. Each machine has 48 250GB disks, configured as two root disks, 44 data disks, and two hot spares (unused in this test), plus two dual-core 2.8GHz Opterons and 16GB of RAM.

I first tested with a workload designed to simulate the images workload on amane. This started 2 "uploader" threads, each of which deleted one file, created two new files (of a random size up to 1MB), then slept for 2 seconds. Concurrently, it started 100 "webserver" threads, each of which opened a random file, read its entire contents in 64KB blocks, then closed it.

I prepared the benchmark by creating 25,000 files in a filesystem, totalling about 25GB. Note that this exceeds the system's 16GB of RAM, so the working set could not be cached entirely.

Over a 1200-second run, the system achieved 1MB/sec of writes (expected given the light write workload) and 726MB/sec of reads. On average, 725 whole-file reads per second were achieved, which suggests that file size did not greatly affect performance; this is probably due to ZFS's aggressive read-ahead.

Next I ran the same benchmark with 20 creator threads (that is, 10 times the write workload), again for 1200 seconds. This produced 10MB/sec of writes and 900MB/sec of reads (possibly because of caching? This one needs to be re-run.)

Then I re-ran both benchmarks with an fsync() after every file write (a more accurate simulation of NFS, which requires synchronous writes). This produced no apparent performance decrease in either benchmark, although 'zpool iostat' showed significantly more disk writing.
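The extra write activity can be watched while filebench is running by printing per-pool I/O statistics; for example (the pool name 'tank' is an assumption):

# Print per-vdev I/O statistics for the pool every 5 seconds
zpool iostat -v tank 5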

I ran the benchmark again while taking a 'zfs snapshot' every 30 seconds (to simulate filesystem replication to another host). I tested with snapshots only, and with an incremental 'zfs send' after each snapshot. Neither appeared to have any impact on performance.
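A minimal sketch of that snapshot loop, assuming the data lives in a filesystem named tank/export and is replicated to a hypothetical host called backuphost (for the snapshot-only variant, the zfs send line is simply left out):

#!/bin/ksh
# Snapshot tank/export every 30 seconds and send the changes to another host.
# Filesystem, pool, and host names are placeholders.
fs=tank/export
prev=""
i=0
while true; do
    snap="$fs@bench$i"
    zfs snapshot "$snap"
    if [ -n "$prev" ]; then
        # incremental send of everything written since the previous snapshot
        zfs send -i "$prev" "$snap" | ssh backuphost zfs receive -F tank/export-copy
    fi
    prev="$snap"
    i=$((i + 1))
    sleep 30
done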

I ran the same benchmark over NFS, with filebench running on ms2. This achieved 121MB/sec (close to the gigabit Ethernet limit) and 120 operations/sec. It would be interesting to try this with smaller files.
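For context, a sketch of the NFS setup this implies, assuming the benchmark filesystem is tank/export (mounted at /export) and that ms2 is also running Solaris; host and mount-point names are placeholders:

# On the thumper: export the benchmark filesystem over NFS
zfs set sharenfs=on tank/export

# On ms2: mount it, then point filebench's $dir at the mount point
mount -F nfs thumper1:/export/filebench /export/filebench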

I changed the test to use files of up to 64KB instead of up to 1MB. Although throughput went down (478MB/sec), file operations (open, read, close) went up to 7,685/sec. Over NFS, this gave 52MB/sec and 2,500 file ops/sec.

Comparison of RAID 10 and RAID Z

Test setup

RAID 10
22 pairs of mirrored disks concatenated into one volume, plus two hot spares.
RAID Z
9 RAID Z sets of 5 disks concatenated into one volume, plus one hot spare.
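For illustration, the two layouts could be created roughly as follows; the disk names are placeholders (real X4500 device names differ) and only the first few vdevs of each pool are written out:

# RAID 10: 22 two-way mirrors combined into one pool, plus two hot spares
zpool create tank \
    mirror c0t0d0 c0t1d0 \
    mirror c0t2d0 c0t3d0 \
    mirror c0t4d0 c0t5d0 \
    spare c5t6d0 c5t7d0

# RAID Z: nine 5-disk raidz groups combined into one pool, plus one hot spare
zpool create tank \
    raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
    spare c5t7d0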

Workload script

set $dir=/export/filebench
set $nfiles=500000
set $meandirwidth=1000
set $filesize=128k
set $nthreads=200
set $readiosize=32k
set $ncreators=8

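# Fileset: 500,000 files of 128KB each, spread over directories with a mean
# width of 1000; prealloc=50 precreates half of them before the run starts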
define fileset name=bigfileset,path=$dir,size=$filesize,entries=$nfiles,dirwidth=$meandirwidth,prealloc=50,reuse

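# One process with two kinds of threads: the "uploads" threads delete one file,
# then create, write, and fsync two new files before pausing 2 seconds; the
# "webserver" threads repeatedly open the next file, read it in full in
# $readiosize blocks, and close it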
define process name=images,instances=1
{
  thread name=uploads,memsize=10m,instances=$ncreators
  {
    flowop deletefile name=deletefile1,filesetname=bigfileset

    flowop createfile name=createfile1,filesetname=bigfileset,fd=1
    flowop appendfilerand name=appendfilerand1,iosize=$filesize,fd=1
    flowop fsync name=fsync1,fd=1
    flowop closefile name=closefile1,fd=1

    flowop createfile name=createfile2,filesetname=bigfileset,fd=1
    flowop appendfilerand name=appendfilerand2,iosize=$filesize,fd=1
    flowop fsync name=fsync2,fd=1
    flowop closefile name=closefile2,fd=1

    flowop delay name=wait,value=2
  }
  thread name=webserver,memsize=10m,instances=$nthreads
  {
    flowop openfile name=openfile2,filesetname=bigfileset,opennext,fd=1
    flowop readwholefile name=readfile2,fd=1,iosize=$readiosize
    flowop closefile name=closefile2,fd=1

    flowop opslimit name=limit
  }
}

RAID 10 Results

limit                       0ops/s   0.0mb/s      0.0ms/op        0us/op-cpu
closefile2              21698ops/s   0.0mb/s      0.0ms/op        5us/op-cpu
readfile2               21695ops/s 2709.8mb/s      4.0ms/op      126us/op-cpu
openfile2               21695ops/s   0.0mb/s      3.0ms/op       36us/op-cpu
wait                        3ops/s   0.0mb/s   2108.9ms/op       26us/op-cpu
closefile2                  0ops/s   0.0mb/s      0.0ms/op        0us/op-cpu
fsync2                      3ops/s   0.0mb/s    168.9ms/op      217us/op-cpu
appendfilerand2             3ops/s   0.2mb/s      1.4ms/op      147us/op-cpu
createfile2                 3ops/s   0.0mb/s     23.0ms/op      162us/op-cpu
closefile1                  3ops/s   0.0mb/s      0.3ms/op        6us/op-cpu
fsync1                      3ops/s   0.0mb/s    214.2ms/op      176us/op-cpu
appendfilerand1             3ops/s   0.2mb/s      0.6ms/op      140us/op-cpu
createfile1                 3ops/s   0.0mb/s      2.5ms/op      129us/op-cpu
deletefile1                 3ops/s   0.0mb/s      1.7ms/op      112us/op-cpu

15064: 3527.288: IO Summary:      39458334 ops 65111.8 ops/s, (21695/6 r/w) 2710.2mb/s,    173us cpu/op,   7.0ms latency
15064: 3527.288: Shutting down processes

RAID Z Results

22288: 3300.576: Per-Operation Breakdown
limit                       0ops/s   0.0mb/s      0.0ms/op        0us/op-cpu
closefile2              22927ops/s   0.0mb/s      0.0ms/op        4us/op-cpu
readfile2               22924ops/s 2860.0mb/s      3.7ms/op      122us/op-cpu
openfile2               22924ops/s   0.0mb/s      2.5ms/op       36us/op-cpu
wait                        3ops/s   0.0mb/s   2088.8ms/op       26us/op-cpu
closefile2                  0ops/s   0.0mb/s      0.0ms/op        0us/op-cpu
fsync2                      3ops/s   0.0mb/s    191.0ms/op      325us/op-cpu
appendfilerand2             3ops/s   0.2mb/s      1.3ms/op      141us/op-cpu
createfile2                 3ops/s   0.0mb/s     15.9ms/op      154us/op-cpu
closefile1                  3ops/s   0.0mb/s      0.0ms/op        5us/op-cpu
fsync1                      3ops/s   0.0mb/s    213.8ms/op      278us/op-cpu
appendfilerand1             3ops/s   0.2mb/s      0.7ms/op      137us/op-cpu
createfile1                 3ops/s   0.0mb/s      2.3ms/op      123us/op-cpu
deletefile1                 3ops/s   0.0mb/s      2.1ms/op      125us/op-cpu

22288: 3300.576: IO Summary:      41692903 ops 68800.3 ops/s, (22924/6 r/w) 2860.4mb/s,    165us cpu/op,   6.4ms latency

Summary

RAID Z seems to be a little faster than RAID 10 in this test. RAID 10 leaves 4.9TB of disk space available, while RAID Z leaves 8TB, since each 5-disk RAID Z set gives up only one disk's worth of capacity to parity rather than half of the disks to mirroring. All in all, RAID Z seems preferable.