Script 1: makecephconfig

makecephconfig is a simple script that generates a ceph.conf, making it easy to test many combinations of hosts and OSDs.

  • It lets you quickly switch the configuration, e.g. from 3 OSDs on 1 node (3 disks/OSDs total) to 3 nodes with 1 OSD each (also 3 disks/OSDs total).
  • It lets you generate many different OSD/node layouts.
  • The purpose is to speed up benchmarking of different node configurations.

  • Note that at this stage the script only varies the OSD sections; you can modify it to handle multiple MDS/MON entries in a similar way.
  • You need an initial template named ceph.xxx, where xxx is your preferred configuration name and is also used as the OSD host-name prefix. In our example it is ceph.ss, so the hosts become ss1, ss2, ss3, and so on.
  • You need to make sure the underlying /dev/sdX devices are correctly prepared (formatted and mounted), otherwise Ceph obviously won't find them; a hedged example is sketched after this list.
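
For example, preparing the disks might look like the following. This is only a hedged sketch: the filesystem type, the device names and the /data/osd.NM mount points are assumptions and must match whatever your ceph.xxx template and makecephconfig expect.

#!/bin/bash
# hypothetical disk preparation for host ss1 with 6 OSDs (adjust devices/paths to your setup)
i=1
for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg; do
    mkfs.ext4 -F $dev            # or mkfs.btrfs, depending on the filesystem you want to test
    mkdir -p /data/osd.1$i       # matches the /data/osd.1X naming shown below
    mount $dev /data/osd.1$i
    i=$((i+1))
done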

For instance, type:

./makecephconfig ss 3 1

This builds a new ceph.conf by loading ceph.ss, with 3 nodes and 1 osd each (total 3 osds).

./makecephconfig ss 3 3
produces 3 hosts and 3 osds each (total 9 osds)

./makecephconfig ss 6 2
produces 6 hosts and 2 osds each (total 12 osds)

./makecephconfig ss 2 6
produces 2 hosts and 6 osds each (total 12 osds)

The last example (2 hosts, 6 osds each) produces the following:

[osd.0]
        host = ss1
        osd data = /data/osd.11
        osd journal = /data/osd.11/journal
[osd.1]
        host = ss1
        osd data = /data/osd.12
        osd journal = /data/osd.12/journal
[osd.2]
        host = ss1
        osd data = /data/osd.13
        osd journal = /data/osd.13/journal
[osd.3]
        host = ss1
        osd data = /data/osd.14
        osd journal = /data/osd.14/journal
[osd.4]
        host = ss1
        osd data = /data/osd.15
        osd journal = /data/osd.15/journal
[osd.5]
        host = ss1
        osd data = /data/osd.16
        osd journal = /data/osd.16/journal
[osd.6]
        host = ss2
        osd data = /data/osd.21
        osd journal = /data/osd.21/journal
[osd.7]
        host = ss2
        osd data = /data/osd.22
        osd journal = /data/osd.22/journal
[osd.8]
        host = ss2
        osd data = /data/osd.23
        osd journal = /data/osd.23/journal
[osd.9]
        host = ss2
        osd data = /data/osd.24
        osd journal = /data/osd.24/journal
[osd.10]
        host = ss2
        osd data = /data/osd.25
        osd journal = /data/osd.25/journal
[osd.11]
        host = ss2
        osd data = /data/osd.26
        osd journal = /data/osd.26/journal

Notice the ordering: osd.0 always starts on ss1, and once ss1 has its 6 osds (osd.0 through osd.5), the numbering continues on ss2.
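
The attached makecephconfig is not reproduced here, but the loop it performs probably looks something like the following hedged sketch; the variable names and the copy-the-template step are assumptions, not the actual script.

#!/bin/bash
# hypothetical reconstruction -- usage: ./makecephconfig <prefix> <nodes> <osds-per-node>
prefix=$1; nodes=$2; per=$3
cp ceph.$prefix ceph.conf                 # start from the ceph.xxx template
id=0
for n in $(seq 1 $nodes); do
    for o in $(seq 1 $per); do
        cat >> ceph.conf <<EOF
[osd.$id]
        host = $prefix$n
        osd data = /data/osd.$n$o
        osd journal = /data/osd.$n$o/journal
EOF
        id=$((id+1))
    done
done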

Script 2: run

See the example code in the attached init and start scripts.
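
For reference, on Ceph releases of that era the init and start helpers would typically wrap commands along these lines; this is a hedged guess at what the attached scripts do, not their actual contents.

# init: build the cluster from the freshly generated ceph.conf
mkcephfs -a -c /etc/ceph/ceph.conf

# start: bring up all daemons on all hosts
service ceph -a start          # or: /etc/init.d/ceph -a start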

Create your own bash run script that accepts the same arguments, e.g.,

./run ss 2 6

Your run script should contain something like this:

#!/bin/bash
# .... etc

function func1() {
   echo "makecephconfig: $*"
   ./makecephconfig "$@"
   echo "init: $*"
   ./init "$@"
   echo "start: $* full"
   ./start "$@" full
}
func1 "$@"

ssh ss3 ceph osd pool set data size 1
ssh ss3 ceph osd pool set metadata size 1
# finally, wait several seconds before mounting ceph, etc.
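
For example, the final mount step might look like this; it is a hedged sketch, assuming ss1 runs the monitor, cephx authentication is disabled, and /mnt/ceph is the mount point.

sleep 10                              # give the cluster a few seconds to settle
mkdir -p /mnt/ceph
mount -t ceph ss1:6789:/ /mnt/ceph    # kernel-client mount against the first monitor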

You can then execute a series of benchmark tests, e.g.,

./run ss 1 2
./run ss 1 4
./run ss 1 6
# etc. Between each test, make sure you drop the caches, clean up the directories, copy the output dumps, and so on.
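
For instance, the per-test cleanup might include something like this hedged sketch (the results directory layout and dump file names are hypothetical):

sync
echo 3 > /proc/sys/vm/drop_caches            # drop page cache, dentries and inodes
mkdir -p results/run_1_2                     # hypothetical per-run results directory
cp *.dump results/run_1_2/ 2>/dev/null       # copy whatever output dumps the run produced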
Topic attachments:
  • ceph.ss (0.8 K) - example ceph.xxx template
  • init1 (2.5 K) - example init script
  • makecephconfig (0.7 K) - the config generation script
  • start1 (1.4 K) - example start script