Brief testing with the fio benchmark. The job file:
<pre><verbatim>
[global]
bs=4k
ioengine=libaio
iodepth=1
size=2g
direct=1
runtime=10
directory=/media/ceph

[rand-read]
rw=randread
stonewall

[rand-write]
rw=randwrite
stonewall

[seq-read]
rw=read
stonewall

[seq-write]
rw=write
stonewall
</verbatim></pre>

The run and its output:
<pre><verbatim>
dlee064@ubuntu-dlee064:~/Dropbox/code/spc1/run$ sudo ./fio fioceph
rand-read: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=1
rand-write: (g=1): rw=randwrite, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=1
seq-read: (g=2): rw=read, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=1
seq-write: (g=3): rw=write, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=1
Starting 4 processes
seq-write: Laying out IO file(s) (1 file(s) / 2048MB)
Jobs: 1 (f=1): [___W] [57.7% done] [0K/36K /s] [0/9 iops] [eta 00m:30s]
rand-read: (groupid=0, jobs=1): err= 0: pid=7327
  read : io=2228KB, bw=226561B/s, iops=55, runt= 10070msec
    slat (usec): min=334, max=6417K, avg=18076.68, stdev=273238.04
    clat (usec): min=0, max=7, avg= 0.48, stdev= 0.59
    bw (KB/s) : min= 0, max= 1776, per=274.21%, avg=606.00, stdev=769.73
  cpu : usr=0.00%, sys=0.00%, ctx=558, majf=0, minf=25
  IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit   : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=557/0, short=0/0
     lat (usec): 2=98.56%, 4=1.26%, 10=0.18%

rand-write: (groupid=1, jobs=1): err= 0: pid=7329
  write: io=315392B, bw=31180B/s, iops=7, runt= 10115msec
    slat (msec): min=84, max=1324, avg=131.35, stdev=144.25
    clat (usec): min=0, max=7, avg= 1.19, stdev= 0.86
    bw (KB/s) : min= 9, max= 40, per=108.54%, avg=32.56, stdev= 9.14
  cpu : usr=0.00%, sys=0.00%, ctx=79, majf=0, minf=23
  IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit   : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/77, short=0/0
     lat (usec): 2=77.92%, 4=20.78%, 10=1.30%

seq-read: (groupid=2, jobs=1): err= 0: pid=7330
  read : io=110676KB, bw=11066KB/s, iops=2766, runt= 10001msec
    slat (usec): min=292, max=39473, avg=359.95, stdev=289.00
    clat (usec): min=0, max=32, avg= 0.45, stdev= 0.60
    bw (KB/s) : min= 9984, max=11368, per=100.37%, avg=11107.37, stdev=296.59
  cpu : usr=0.50%, sys=1.30%, ctx=35966, majf=0, minf=27
  IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit   : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=27669/0, short=0/0
     lat (usec): 2=99.95%, 4=0.03%, 10=0.01%, 20=0.01%, 50=0.01%

seq-write: (groupid=3, jobs=1): err= 0: pid=7335
  write: io=303104B, bw=30090B/s, iops=7, runt= 10073msec
    slat (msec): min=91, max=946, avg=136.11, stdev=110.69
    clat (usec): min=0, max=6, avg= 1.32, stdev= 0.89
    bw (KB/s) : min= 11, max= 39, per=106.68%, avg=30.94, stdev= 7.40
  cpu : usr=0.00%, sys=0.00%, ctx=85, majf=0, minf=27
  IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit   : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/74, short=0/0
     lat (usec): 2=60.81%, 4=37.84%, 10=1.35%

Run status group 0 (all jobs):
   READ: io=2228KB, aggrb=221KB/s, minb=226KB/s, maxb=226KB/s, mint=10070msec, maxt=10070msec

Run status group 1 (all jobs):
  WRITE: io=308KB, aggrb=30KB/s, minb=31KB/s, maxb=31KB/s, mint=10115msec, maxt=10115msec

Run status group 2 (all jobs):
   READ: io=110676KB, aggrb=11066KB/s, minb=11332KB/s, maxb=11332KB/s, mint=10001msec, maxt=10001msec

Run status group 3 (all jobs):
  WRITE: io=296KB, aggrb=29KB/s, minb=30KB/s, maxb=30KB/s, mint=10073msec, maxt=10073msec
</verbatim></pre>

Overall, both the sequential and random *write* results appear very poor: roughly 30 KB/s at ~7 IOPS, versus ~11 MB/s for sequential reads. The write numbers are consistent with the measured submission latency: with iodepth=1 each 4 KB write must complete before the next is issued, and an average slat of ~130 ms caps throughput at about 1000/130 ≈ 7.6 IOPS, i.e. ~30 KB/s.

-- Main.DongJinLee - 16 Sep 2010
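A possible follow-up (a sketch only, not run here): re-running the write jobs with a deeper queue would show whether the limit is per-IO latency or raw write bandwidth. The job names below are illustrative; everything else matches the job file above except iodepth.
<pre><verbatim>
; Hypothetical follow-up job (untested): same 4k direct writes,
; but with 16 IOs in flight instead of 1. If throughput scales up,
; the bottleneck is per-IO latency rather than write bandwidth.
[global]
bs=4k
ioengine=libaio
iodepth=16
size=2g
direct=1
runtime=10
directory=/media/ceph

[rand-write-qd16]
rw=randwrite
stonewall

[seq-write-qd16]
rw=write
stonewall
</verbatim></pre>
If writes stay near 30 KB/s even at queue depth 16, the limit is more likely on the Ceph side (e.g. synchronous journaling or replication) than in client-side queuing.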