Pre-Setup LAB Ceph Storage Configuration:
OS: CentOS7.2 64bits
CPU: 1 Core
RAM: 2GB or more
NIC: 2 cards (Storage and Internet)
HDD: 20GB or more
Server IP Address:
172.20.1.30 ldp-ceph
172.20.1.31 node01
172.20.1.32 node02
172.20.1.33 node03
1. Perform the following initial configuration on every machine
Configure the hosts file
vi /etc/hosts
172.20.1.30 ldp-ceph ldp-ceph.example.me
172.20.1.31 node01 node01.example.me
172.20.1.32 node02 node02.example.me
172.20.1.33 node03 node03.example.me
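Since the four entries follow one pattern, they can also be generated with a loop. This is a sketch only: it writes to a scratch file so nothing system-wide is touched; in the lab you would append the output to /etc/hosts instead.

```shell
# Scratch file standing in for /etc/hosts
out=./hosts.sample
: > "$out"
# IPs 172.20.1.30-33 from the server list above
i=0
for h in ldp-ceph node01 node02 node03; do
    echo "172.20.1.3$i $h $h.example.me" >> "$out"
    i=$((i+1))
done
cat "$out"
```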
Add the ceph-admin user
useradd -d /home/ceph-admin -m ceph-admin
echo -e "ceph-admin\nceph-admin\n" | passwd ceph-admin
echo -e 'Defaults:ceph-admin !requiretty\nceph-admin ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph-admin
chmod 0440 /etc/sudoers.d/ceph-admin
Set the timezone and sync time with NTP
timedatectl set-timezone Asia/Bangkok
yum install -y ntp ntpdate
vi /etc/ntp.conf
server 1.th.pool.ntp.org iburst
server 1.asia.pool.ntp.org iburst
server 2.asia.pool.ntp.org iburst
systemctl restart ntpd
systemctl enable ntpd
Disable firewalld and SELinux
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0    # the sed edit only takes effect after a reboot; this applies it now
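To see what the sed one-liner does before pointing it at the real /etc/selinux/config, here is a sketch against a scratch copy (the file contents are illustrative):

```shell
# Scratch copy standing in for /etc/selinux/config
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > ./selinux.sample
# Same substitution the lab applies to the real file
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' ./selinux.sample
cat ./selinux.sample
```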
Add the repository for the Ceph packages
yum -y install centos-release-ceph-hammer epel-release yum-plugin-priorities
sed -i -e "s/enabled=1/enabled=1\npriority=1/g" /etc/yum.repos.d/CentOS-Ceph-Hammer.repo
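The sed command above relies on GNU sed accepting `\n` in the replacement text to push `priority=1` onto its own line. A sketch of the same edit on a scratch file (the section name is illustrative):

```shell
# Scratch stand-in for CentOS-Ceph-Hammer.repo
printf '[centos-ceph-hammer]\nenabled=1\n' > ./repo.sample
# GNU sed: \n in the replacement inserts a literal newline,
# so "priority=1" lands on the line after each "enabled=1"
sed -i -e "s/enabled=1/enabled=1\npriority=1/g" ./repo.sample
cat ./repo.sample
```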
Create a directory for Ceph Storage on each OSD node
[root@node01 ~]# mkdir /storage01
[root@node02 ~]# mkdir /storage02
[root@node03 ~]# mkdir /storage03
2. Configure and deploy Ceph Storage from the ldp-ceph machine
su - ceph-admin
[ceph-admin@ldp-ceph ~]$ ssh-keygen
[ceph-admin@ldp-ceph ~]$ vi ~/.ssh/config
Host ldp-ceph
    Hostname ldp-ceph.example.me
    User ceph-admin
Host ceph01
    Hostname node01.example.me
    User ceph-admin
Host ceph02
    Hostname node02.example.me
    User ceph-admin
Host ceph03
    Hostname node03.example.me
    User ceph-admin
[ceph-admin@ldp-ceph ~]$ chmod 600 ~/.ssh/config
[ceph-admin@ldp-ceph ~]$ ssh-copy-id node01
[ceph-admin@ldp-ceph ~]$ ssh-copy-id node02
[ceph-admin@ldp-ceph ~]$ ssh-copy-id node03
[ceph-admin@ldp-ceph ~]$ sudo yum -y install ceph-deploy
[ceph-admin@ldp-ceph ~]$ mkdir ceph-cluster
[ceph-admin@ldp-ceph ~]$ cd ceph-cluster
[ceph-admin@ldp-ceph ceph-cluster]$ ceph-deploy new node01
[ceph-admin@ldp-ceph ceph-cluster]$ vi ./ceph.conf
# add to the end
osd pool default size = 2
[ceph-admin@ldp-ceph ceph-cluster]$ ceph-deploy install ldp-ceph node01 node02 node03
[ceph-admin@ldp-ceph ceph-cluster]$ ceph-deploy mon create-initial
[ceph-admin@ldp-ceph ceph-cluster]$ ceph-deploy osd prepare node01:/storage01 node02:/storage02 node03:/storage03
[ceph-admin@ldp-ceph ceph-cluster]$ ceph-deploy osd activate node01:/storage01 node02:/storage02 node03:/storage03
[ceph-admin@ldp-ceph ceph-cluster]$ ceph-deploy admin ldp-ceph node01 node02 node03
[ceph-admin@ldp-ceph ceph-cluster]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
[ceph-admin@ldp-ceph ceph-cluster]$ ceph health
HEALTH_OK
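The prepare and activate steps above repeat the same node:path list. A sketch that builds both ceph-deploy invocations from one list (it only prints the commands, so it is safe to run anywhere):

```shell
# The node:path pairs from this lab
nodes="node01:/storage01 node02:/storage02 node03:/storage03"
# Print (not run) the prepare and activate invocations
for step in prepare activate; do
    echo "ceph-deploy osd $step $nodes"
done
```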
Verify the configuration
[ceph-admin@ldp-ceph ~]$ ceph -s
    cluster 4c19e7bb-4a4d-49c7-bbb5-1ec6b7817fb4
     health HEALTH_OK
     monmap e1: 1 mons at {node01=172.20.1.31:6789/0}
            election epoch 2, quorum 0 node01
     osdmap e19: 3 osds: 3 up, 3 in
      pgmap v1497: 320 pgs, 3 pools, 0 bytes data, 1 objects
            19187 MB used, 34446 MB / 53634 MB avail
                 320 active+clean
[ceph-admin@ldp-ceph ~]$
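Once `ceph health` reports HEALTH_OK, a small gate like the following could be dropped into a cron job to flag degradation. This is a sketch: the status is a canned string here since no cluster is available; in the lab you would replace it with the real `ceph health` call shown in the comment.

```shell
# In the lab, use: status=$(ceph health)
status="HEALTH_OK"    # canned value for illustration only
if [ "$status" = "HEALTH_OK" ]; then
    echo "cluster healthy"
else
    # Anything other than HEALTH_OK (HEALTH_WARN, HEALTH_ERR) trips the gate
    echo "cluster degraded: $status" >&2
    exit 1
fi
```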
The Ceph Storage deployment is now complete. Next time we will continue with how to put it to use.........