Because of Ceph's fast-moving pace, a lot of the Ceph documentation, including some articles on Ceph.com, is out of date. Below is what I tested on an SL6.6 node; it should also serve as a reference for other Linux distributions.

Env:

Kernel: 2.6.32-504.3.3.el6.x86_64
Ceph: 0.87

Adding an OSD to the Ceph cluster

Display current osd tree

# ceph osd tree
# id    weight    type name    up/down    reweight
-1    1    root default
-2    1        host pool02
0    1            osd.0    up    1    

Only one OSD is available for now; let's add a second OSD to the cluster.

Create the OSD

On the Ceph cluster node:

# uuidgen
adfa4a36-e12e-4e11-875b-ceda0ec9a228

Create an OSD entry in the Ceph cluster

[root@pool02 ceph]# ceph osd create adfa4a36-e12e-4e11-875b-ceda0ec9a228
1
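The number returned by ceph osd create (1 here) is the new OSD id; it has to be used consistently in the data directory name, the auth entry, the CRUSH entry and ceph.conf below. If you prefer, the two steps can be scripted together, roughly like this (the variable names are mine):

# OSD_UUID=$(uuidgen)
# OSD_ID=$(ceph osd create ${OSD_UUID})
# echo "new osd id: ${OSD_ID}"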

Make and mount the filesystem

On the OSD node:

[root@pool01 ceph]# mkdir /var/lib/ceph/osd/ceph-1

[root@pool01 ceph]# mkfs -t xfs -f /dev/md5           
meta-data=/dev/md5               isize=256    agcount=32, agsize=137347296 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=4395112704, imaxpct=5
         =                       sunit=32     swidth=288 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=32 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@pool01 ceph]# mount /dev/md5 /var/lib/ceph/osd/ceph-1
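To keep the OSD filesystem mounted across reboots, add it to /etc/fstab as well. A minimal entry, assuming the same device and mount point as above (noatime and inode64 are common XFS choices for OSD data, adjust to taste):

/dev/md5    /var/lib/ceph/osd/ceph-1    xfs    defaults,noatime,inode64    0 0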

Initialize the OSD data directory

On the OSD node:

[root@pool01 ceph]# ceph-osd -i 1 --mkfs --mkkey --osd-uuid adfa4a36-e12e-4e11-875b-ceda0ec9a228
2015-03-24 11:20:35.323420 7f772050b800 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2015-03-24 11:20:36.029568 7f772050b800 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2015-03-24 11:20:36.040569 7f772050b800 -1 filestore(/var/lib/ceph/osd/ceph-1) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2015-03-24 11:20:37.118578 7f772050b800 -1 created object store /var/lib/ceph/osd/ceph-1 journal /var/lib/ceph/osd/ceph-1/journal for osd.1 fsid 3e6b9482-f7b7-4fcc-ba8f-472d8a5fbf24
2015-03-24 11:20:37.118706 7f772050b800 -1 auth: error reading file: /var/lib/ceph/osd/ceph-1/keyring: can't open /var/lib/ceph/osd/ceph-1/keyring: (2) No such file or directory
2015-03-24 11:20:37.118933 7f772050b800 -1 created new key in keyring /var/lib/ceph/osd/ceph-1/keyring
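The "error reading file" message is expected here: the keyring did not exist yet, so ceph-osd created a new key. As a quick sanity check, the keyring should now be present and the fsid printed above should match the cluster fsid in ceph.conf:

# ls -l /var/lib/ceph/osd/ceph-1/keyring
# grep fsid /etc/ceph/ceph.conf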

Register the OSD authentication key

On the OSD node:

[root@pool01 ceph]# ceph auth add osd.1 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-1/keyring
added key for osd.1
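Optionally, verify that the key and its capabilities were registered as expected:

# ceph auth get osd.1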

On your Ceph node, add the new OSD host as a bucket to the CRUSH map

# ceph osd crush add-bucket pool01 host
Added bucket 'pool01'

On the Ceph node, place the new host bucket under the default root

# ceph osd crush move pool01 root=default
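At this point the CRUSH map should show an empty pool01 host bucket under root default; the OSD itself is attached to it in the next step:

# ceph osd tree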

Add the OSD to the CRUSH map so that it can begin receiving data

# ceph osd crush add osd.1 1.0 host=pool01
add item id 1 name 'osd.1' weight 1 at location {host=pool01} to crush map

Start the new OSD

Update /etc/ceph/ceph.conf and add:

[osd.1]
    host = pool01

Then start the OSD.

Note: I started the OSD with the following debug settings:

        debug osd = 1/5
        debug filestore = 1/5
        debug journal = 1
        debug monc = 20/20
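Putting the pieces together, the osd.1 section in ceph.conf would look roughly like this (the debug lines are optional and only useful while troubleshooting):

[osd.1]
    host = pool01
    debug osd = 1/5
    debug filestore = 1/5
    debug journal = 1
    debug monc = 20/20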

# service ceph start osd.1
=== osd.1 ===
libust[29131/29131]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
2015-03-24 11:22:29.863136 7f2e8582f700 10 monclient(hunting): build_initial_monmap
2015-03-24 11:22:29.863525 7f2e8582f700 10 monclient(hunting): init
2015-03-24 11:22:29.863573 7f2e8582f700 10 monclient(hunting): auth_supported 2 method cephx
2015-03-24 11:22:29.864588 7f2e8582f700 10 monclient(hunting): _reopen_session rank -1 name
2015-03-24 11:22:29.864662 7f2e8582f700 10 monclient(hunting): picked mon.zpool02 con 0x7f2e800285b0 addr 206.12.1.171:6789/0
2015-03-24 11:22:29.864716 7f2e8582f700 10 monclient(hunting): _send_mon_message to mon.pool02 at 206.12.1.171:6789/0
2015-03-24 11:22:29.864736 7f2e8582f700 10 monclient(hunting): renew_subs
2015-03-24 11:22:29.864741 7f2e8582f700 10 monclient(hunting): authenticate will time out at 2015-03-24 11:27:29.864740
2015-03-24 11:22:29.866556 7f2e7ebfd700 10 monclient(hunting): handle_monmap mon_map magic: 0 v1
2015-03-24 11:22:29.866588 7f2e7ebfd700 10 monclient(hunting):  got monmap 1, mon.zpool02 is now rank 0
2015-03-24 11:22:29.866594 7f2e7ebfd700 10 monclient(hunting): dump:
epoch 1
fsid 3e6b9482-f7b7-4fcc-ba8f-472d8a5fbf24
last_changed 2015-03-17 11:18:18.727022
created 2015-03-17 11:18:18.727022
0: 206.12.1.171:6789/0 mon.pool02

2015-03-24 11:22:29.866757 7f2e7ebfd700 10 monclient(hunting): my global_id is 5016
2015-03-24 11:22:29.867057 7f2e7ebfd700 10 monclient(hunting): _send_mon_message to mon.pool02 at 206.12.1.171:6789/0
2015-03-24 11:22:29.868336 7f2e7ebfd700 10 monclient(hunting): _send_mon_message to mon.pool02 at 206.12.1.171:6789/0
2015-03-24 11:22:29.869530 7f2e7ebfd700  1 monclient(hunting): found mon.pool02
2015-03-24 11:22:29.869543 7f2e7ebfd700 10 monclient: _send_mon_message to mon.pool02 at 206.12.1.171:6789/0
2015-03-24 11:22:29.869601 7f2e7ebfd700 10 monclient: _check_auth_rotating renewing rotating keys (they expired before 2015-03-24 11:21:59.869597)
2015-03-24 11:22:29.869625 7f2e7ebfd700 10 monclient: _send_mon_message to mon.pool02 at 206.12.1.171:6789/0
2015-03-24 11:22:29.869675 7f2e8582f700  5 monclient: authenticate success, global_id 5016
2015-03-24 11:22:29.869709 7f2e8582f700 10 monclient: renew_subs
2015-03-24 11:22:29.869714 7f2e8582f700 10 monclient: _send_mon_message to mon.pool02 at 206.12.1.171:6789/0
2015-03-24 11:22:29.869896 7f2e8582f700 10 monclient: renew_subs
2015-03-24 11:22:29.869933 7f2e8582f700 10 monclient: _send_mon_message to mon.pool02 at 206.12.1.171:6789/0
2015-03-24 11:22:29.870198 7f2e7ebfd700 10 monclient: handle_monmap mon_map magic: 0 v1
2015-03-24 11:22:29.870224 7f2e7ebfd700 10 monclient:  got monmap 1, mon.pool02 is now rank 0
2015-03-24 11:22:29.870231 7f2e7ebfd700 10 monclient: dump:
epoch 1
fsid 3e6b9482-f7b7-4fcc-ba8f-472d8a5fbf24
last_changed 2015-03-17 11:18:18.727022
created 2015-03-17 11:18:18.727022
0: 206.12.1.171:6789/0 mon.zpool02

2015-03-24 11:22:29.870334 7f2e7ebfd700 10 monclient: handle_subscribe_ack sent 2015-03-24 11:22:29.864738 renew after 2015-03-24 11:24:59.864738
2015-03-24 11:22:29.870586 7f2e7ebfd700 10 monclient: _check_auth_rotating have uptodate secrets (they expire after 2015-03-24 11:21:59.870584)
2015-03-24 11:22:29.870954 7f2e7ebfd700 10 monclient: handle_subscribe_ack sent 0.000000, ignoring
2015-03-24 11:22:29.871031 7f2e7ebfd700 10 monclient: handle_subscribe_ack sent 0.000000, ignoring
2015-03-24 11:22:29.876126 7f2e8582f700 10 monclient: _send_command 1 [{"prefix": "get_command_descriptions"}]
2015-03-24 11:22:29.876157 7f2e8582f700 10 monclient: _send_mon_message to mon.pool02 at 206.12.1.171:6789/0
2015-03-24 11:22:29.883230 7f2e7ebfd700 10 monclient: handle_mon_command_ack 1 [{"prefix": "get_command_descriptions"}]
2015-03-24 11:22:29.883239 7f2e7ebfd700 10 monclient: _finish_command 1 = 0
2015-03-24 11:22:30.033778 7f2e8582f700 10 monclient: _send_command 2 [{"prefix": "osd crush create-or-move", "args": ["host=pool01", "root=default"], "id": 1, "weight": 16.370000000000001}]
2015-03-24 11:22:30.033809 7f2e8582f700 10 monclient: _send_mon_message to mon.pool02 at 206.12.1.171:6789/0
2015-03-24 11:22:30.035242 7f2e7ebfd700 10 monclient: handle_mon_command_ack 2 [{"prefix": "osd crush create-or-move", "args": ["host=pool01", "root=default"], "id": 1, "weight": 16.370000000000001}]
2015-03-24 11:22:30.035247 7f2e7ebfd700 10 monclient: _finish_command 2 = 0 create-or-move updated item name 'osd.1' weight 16.37 at location {host=pool01,root=default} to crush map
create-or-move updated item name 'osd.1' weight 16.37 at location {host=pool01,root=default} to crush map
2015-03-24 11:22:30.045873 7f2e8582f700 10 monclient: shutdown
Starting Ceph osd.1 on zpool02...
libust[29179/29179]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
starting osd.1 at :/0 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal

Check the OSD tree again

[root@zpool02 ceph]# ceph osd tree
# id    weight    type name    up/down    reweight
-1    2    root default
-2    2        host pool02
0    1            osd.0    up    1   
-3    0        host pool01
1    1            osd.1    up    1    
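The new OSD is up and weighted, so placement groups will start rebalancing onto it. Overall cluster state can be checked at any time with the standard status command; wait for all placement groups to become active+clean:

# ceph -s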

Removing an OSD from the Ceph cluster

In the example below, we remove the new osd.1 from the Ceph cluster.

Take the OSD out of the Ceph cluster

# ceph osd out osd.1

Watch until the data rebalancing completes

# ceph -w
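ceph -w streams cluster events until interrupted with Ctrl-C. For a one-shot check, the cluster should be back to HEALTH_OK (all placement groups active+clean) once rebalancing away from osd.1 has finished:

# ceph health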

Stop the OSD

# service ceph stop osd.1

Remove the OSD CRUSH mapping

# ceph osd crush remove osd.1

Remove the OSD authentication key

# ceph auth del osd.1

Remove the OSD

# ceph osd rm 1
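As a final check, osd.1 should no longer appear in the OSD map or the CRUSH tree:

# ceph osd tree
# ceph osd stat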

Remove the cluster OSD configuration

On the cluster node that holds the master copy of ceph.conf, remove the following lines:

[osd.1]
    host = pool01
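Finally, on the OSD node itself, the now-unused data directory can be unmounted (and any /etc/fstab entry removed, if one was added); the path below matches the one used earlier in this example:

# umount /var/lib/ceph/osd/ceph-1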
