Difference: CephTool (1 vs. 5)

Revision 5 - 2011-09-13 - DongJinLee

Line: 1 to 1
 
META TOPICPARENT name="ExperimentalStorage"

script1

Line: 107 to 106
 #finally wait several seconds before mounting the ceph, etc.
Changed:
<
<
you can then run a series of benchmark tests, e.g.,
>
>
you can then execute a series of benchmark tests, e.g.,
 
./run ss 1 2
./run ss 1 4
./run ss 1 6
Changed:
<
<
#etc. Make sure that between each test you drop caches, manipulate directories, dumps, etc
>
>
#etc. Make sure that between each test you drop caches, manipulate directories, copy output dumps, etc
 

META FILEATTACHMENT attachment="makecephconfig" attr="" comment="" date="1315880758" name="makecephconfig" path="makecephconfig" size="712" stream="makecephconfig" tmpFilename="" user="DongJinLee" version="1"

Revision 4 - 2011-09-13 - DongJinLee

Line: 1 to 1
 
META TOPICPARENT name="ExperimentalStorage"

script1

Line: 82 to 82
 Notice the order: it will always start osd.0 on ss1, and after it reaches osd.5 (6 osds), it moves on to ss2.

script2

Changed:
<
<
start and init examples are included.
>
>
see the example code for init and start.

Create your own bash script that accepts an input, e.g.,

run "ss 2 6"
Your run script should contain something like this.
 
Deleted:
<
<
Create a new script accepting an argument, e.g.,
run ss 2 6
 
Changed:
<
<
function run1() {
>
>
#!/bin/bash
#.... etc

function func1() {

  echo "makecephconfig: $1"
  ./makecephconfig $1
  echo "init: $1"
Line: 94 to 99
  echo "start: $1 full"
  ./start $1 "full"
}
Added:
>
>
func1 $1

ssh ss3 ceph osd pool set data size 1
ssh ss3 ceph osd pool set metadata size 1

#finally wait several seconds before mounting the ceph, etc.
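The final wait-and-mount step could be scripted along these lines (a sketch only; the monitor host ss3 is taken from the pool-set commands above, while the mountpoint /mnt/ceph and the 10-second wait are assumptions):

```shell
#!/bin/bash
# Sketch: give the daemons a few seconds to settle, then mount the Ceph
# filesystem with the kernel client. /mnt/ceph is an assumed mountpoint.
mount_ceph() {
    sleep 10                          # wait several seconds as noted above
    mkdir -p /mnt/ceph
    mount -t ceph ss3:/ /mnt/ceph     # kernel client mount via the monitor host
}
```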

 
Added:
>
>
you can then run a series of benchmark tests, e.g.,

./run ss 1 2
./run ss 1 4
./run ss 1 6
#etc. Make sure that between each test you drop caches, manipulate directories, dumps, etc
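The between-tests cleanup might look like this (a sketch; the test-directory and dump-file paths are hypothetical examples, not taken from the original scripts):

```shell
#!/bin/bash
# Sketch of a between-runs reset so each benchmark starts with cold caches.
# The /mnt/ceph/testdir.* and output.dump paths are hypothetical.
reset_between_runs() {
    sync                                   # flush dirty pages to disk first
    echo 3 > /proc/sys/vm/drop_caches      # drop page/dentry/inode caches (needs root)
    rm -rf /mnt/ceph/testdir.*             # remove directories left by the last run
    cp output.dump "output.$(date +%s).dump"   # keep a copy of the output dump
}
```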
 
META FILEATTACHMENT attachment="makecephconfig" attr="" comment="" date="1315880758" name="makecephconfig" path="makecephconfig" size="712" stream="makecephconfig" tmpFilename="" user="DongJinLee" version="1"
META FILEATTACHMENT attachment="ceph.ss" attr="" comment="" date="1315880771" name="ceph.ss" path="ceph.ss" size="870" stream="ceph.ss" tmpFilename="" user="DongJinLee" version="1"

Revision 3 - 2011-09-13 - DongJinLee

Line: 1 to 1
 
META TOPICPARENT name="ExperimentalStorage"
Changed:
<
<
**script1**
>
>

script1

  makecephconfig is a simple script that creates ceph.conf so you can easily test multiple hosts and OSDs.
Line: 81 to 81
  Notice the order: it will always start osd.0 on ss1, and after it reaches osd.5 (6 osds), it moves on to ss2.
Changed:
<
<
**script2**
>
>

script2

start and init examples are included.

Create a new script that accepts an argument, e.g.,

run ss 2 6
function run1() {
   echo "makecephconfig: $1"
   ./makecephconfig $1
   echo "init: $1"
   ./init $1
   echo "start: $1 full"
   ./start $1 "full"
}
 
META FILEATTACHMENT attachment="makecephconfig" attr="" comment="" date="1315880758" name="makecephconfig" path="makecephconfig" size="712" stream="makecephconfig" tmpFilename="" user="DongJinLee" version="1"
META FILEATTACHMENT attachment="ceph.ss" attr="" comment="" date="1315880771" name="ceph.ss" path="ceph.ss" size="870" stream="ceph.ss" tmpFilename="" user="DongJinLee" version="1"
Added:
>
>
META FILEATTACHMENT attachment="init1" attr="" comment="" date="1315909734" name="init1" path="init1" size="2523" stream="init1" tmpFilename="" user="DongJinLee" version="1"
META FILEATTACHMENT attachment="start1" attr="" comment="" date="1315909745" name="start1" path="start1" size="1458" stream="start1" tmpFilename="" user="DongJinLee" version="1"

Revision 2 - 2011-09-13 - DongJinLee

Line: 1 to 1
 
META TOPICPARENT name="ExperimentalStorage"
Added:
>
>
**script1**
 makecephconfig is a simple script that creates ceph.conf so you can easily test multiple hosts and OSDs.

  • Allows you to easily change the conf to test 3 osds running on 1 node (total 3 disks/osds), and then test 3 nodes with 1 osd each (total 3 disks/osds).
Line: 79 to 81
  Notice the order: it will always start osd.0 on ss1, and after it reaches osd.5 (6 osds), it moves on to ss2.
Added:
>
>
**script2**
 
META FILEATTACHMENT attachment="makecephconfig" attr="" comment="" date="1315880758" name="makecephconfig" path="makecephconfig" size="712" stream="makecephconfig" tmpFilename="" user="DongJinLee" version="1"
META FILEATTACHMENT attachment="ceph.ss" attr="" comment="" date="1315880771" name="ceph.ss" path="ceph.ss" size="870" stream="ceph.ss" tmpFilename="" user="DongJinLee" version="1"

Revision 1 - 2011-09-13 - DongJinLee

Line: 1 to 1
Added:
>
>
META TOPICPARENT name="ExperimentalStorage"
makecephconfig is a simple script that creates ceph.conf so you can easily test multiple hosts and OSDs.

  • Allows you to easily change the conf to test 3 osds running on 1 node (total 3 disks/osds), and then test 3 nodes with 1 osd each (total 3 disks/osds).
  • Allows you to create many osd/node setups.
  • This script speeds up benchmarking of different node configurations.

  • Note that this only changes the OSDs at this stage; you can modify the script to handle multiple MDS/MON in a similar way.
  • You need an initial ceph.xxx file, where xxx is your preferred configuration name (also used as the osds' hostname prefix); in our example it is ceph.ss, so multiple hosts will be ss1, ss2, ss3...
  • You need to make sure each underlying /dev/sdX is correctly configured (formatted and mounted), or else Ceph obviously won't find them.
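The formatting/mounting step for each /dev/sdX could be sketched as follows (the device /dev/sdb, the xfs choice, and the /data/osd.11 mountpoint are example values, not from the original; match the mountpoint to the "osd data" paths in your ceph.conf):

```shell
#!/bin/bash
# Sketch: prepare one disk for an osd. Device, filesystem, and mountpoint
# are example values only.
prep_osd_disk() {
    local dev="$1" mnt="$2"
    mkfs.xfs -f "$dev"       # or btrfs/ext4, whichever filesystem you benchmark
    mkdir -p "$mnt"
    mount "$dev" "$mnt"
}
# e.g. prep_osd_disk /dev/sdb /data/osd.11
```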

For instance, type:

./makecephconfig ss 3 1

This will make a new ceph.conf by loading ceph.ss, with 3 nodes and 1 osd each (total 3 osds)

./makecephconfig ss 3 3
produces 3 hosts and 3 osds each (total 9 osds)

./makecephconfig ss 6 2
produces 6 hosts and 2 osds each (total 12 osds)

./makecephconfig ss 2 6
produces 2 hosts and 6 osds each (total 12 osds)

The last example will produce the following:

[osd.0]
        host = ss1
        osd data = /data/osd.11
        osd journal = /data/osd.11/journal
[osd.1]
        host = ss1
        osd data = /data/osd.12
        osd journal = /data/osd.12/journal
[osd.2]
        host = ss1
        osd data = /data/osd.13
        osd journal = /data/osd.13/journal
[osd.3]
        host = ss1
        osd data = /data/osd.14
        osd journal = /data/osd.14/journal
[osd.4]
        host = ss1
        osd data = /data/osd.15
        osd journal = /data/osd.15/journal
[osd.5]
        host = ss1
        osd data = /data/osd.16
        osd journal = /data/osd.16/journal
[osd.6]
        host = ss2
        osd data = /data/osd.21
        osd journal = /data/osd.21/journal
[osd.7]
        host = ss2
        osd data = /data/osd.22
        osd journal = /data/osd.22/journal
[osd.8]
        host = ss2
        osd data = /data/osd.23
        osd journal = /data/osd.23/journal
[osd.9]
        host = ss2
        osd data = /data/osd.24
        osd journal = /data/osd.24/journal
[osd.10]
        host = ss2
        osd data = /data/osd.25
        osd journal = /data/osd.25/journal
[osd.11]
        host = ss2
        osd data = /data/osd.26
        osd journal = /data/osd.26/journal

Notice the order: it will always start osd.0 on ss1, and after it reaches osd.5 (6 osds), it moves on to ss2.
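The numbering scheme above can be sketched as a small loop; this is an illustration of the layout makecephconfig appears to generate for "ss 2 6", not the script itself:

```shell
#!/bin/bash
# Reproduce the osd-to-host layout shown above for 2 hosts x 6 osds:
# osd.i lands on host ss$((i/M + 1)) with data dir /data/osd.<host><disk>.
hosts=2
osds_per_host=6
for ((i = 0; i < hosts * osds_per_host; i++)); do
    h=$((i / osds_per_host + 1))   # host index: ss1, ss2, ...
    d=$((i % osds_per_host + 1))   # disk index on that host: 1..6
    printf '[osd.%d]\n        host = ss%d\n' "$i" "$h"
    printf '        osd data = /data/osd.%d%d\n' "$h" "$d"
    printf '        osd journal = /data/osd.%d%d/journal\n' "$h" "$d"
done
```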

META FILEATTACHMENT attachment="makecephconfig" attr="" comment="" date="1315880758" name="makecephconfig" path="makecephconfig" size="712" stream="makecephconfig" tmpFilename="" user="DongJinLee" version="1"
META FILEATTACHMENT attachment="ceph.ss" attr="" comment="" date="1315880771" name="ceph.ss" path="ceph.ss" size="870" stream="ceph.ss" tmpFilename="" user="DongJinLee" version="1"
 