
Ceph Deployment

Environment:
OS: CentOS 7

 

1. Disable the firewall and SELinux

# Disable the firewall
systemctl disable firewalld
systemctl stop firewalld

# Disable SELinux
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
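Note that `setenforce 0` only affects the running system; surviving a reboot depends on the line written into /etc/selinux/config. A minimal sketch to verify the persistent setting (`selinux_disabled_in` is a made-up helper name, and the demo checks a throwaway copy rather than the real file):

```shell
# Return success if the given selinux config file pins SELINUX=disabled.
selinux_disabled_in() {
    grep -q '^SELINUX=disabled' "$1"
}

# Exercise it against a temporary stand-in for /etc/selinux/config:
tmpconf=$(mktemp)
printf 'SELINUXTYPE=targeted\nSELINUX=disabled\n' > "$tmpconf"
selinux_disabled_in "$tmpconf" && echo "persistent setting: disabled"
rm -f "$tmpconf"
```

On the real host you would run `selinux_disabled_in /etc/selinux/config` alongside `getenforce`.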

 

2. Set up passwordless SSH

# Configure passwordless SSH for the single Ceph node
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

# Set permissions to 644
chmod 644 ~/.ssh/authorized_keys
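sshd silently ignores an authorized_keys file whose permissions are too open, so it is worth asserting the mode after the chmod. A small sketch (`check_mode` is a hypothetical helper; `stat -c` is the GNU coreutils form available on CentOS):

```shell
# Compare a file's octal mode against an expected value such as 644.
check_mode() {
    [ "$(stat -c '%a' "$1")" = "$2" ]
}

# Demonstrate on a temporary file standing in for ~/.ssh/authorized_keys:
f=$(mktemp)
chmod 644 "$f"
check_mode "$f" 644 && echo "authorized_keys mode OK"
rm -f "$f"
```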

 

3. Configure the yum repository

[root@master yum.repos.d]# more /etc/yum.repos.d/ceph.repo 
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$basearch
gpgcheck=0
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
gpgcheck=0
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
gpgcheck=0
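Hand-editing three nearly identical stanzas invites typos; the same repo file can be generated from one release variable. A sketch, assuming the Aliyun mirror layout shown above (`CEPH_RELEASE`, `EL`, and the local output path are illustrative; `\$basearch` is escaped so yum, not the shell, expands it):

```shell
# Write a ceph.repo equivalent to the one above into the current directory.
CEPH_RELEASE=nautilus
EL=el7

cat > ceph.repo <<EOF
[Ceph]
name=Ceph packages for \$basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-${CEPH_RELEASE}/${EL}/\$basearch
gpgcheck=0
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-${CEPH_RELEASE}/${EL}/noarch
gpgcheck=0
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-${CEPH_RELEASE}/${EL}/SRPMS
gpgcheck=0
EOF

# Then install it: cp ceph.repo /etc/yum.repos.d/ceph.repo
```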

 

4. Install ceph-deploy

# Install via yum
yum install -y ceph-deploy

# Verify the version
[root@master yum.repos.d]# ceph-deploy --version
Traceback (most recent call last):
  File "/usr/bin/ceph-deploy", line 18, in <module>
    from ceph_deploy.cli import main
ModuleNotFoundError: No module named 'ceph_deploy'

Fix: Python 3 is installed on this host, but ceph-deploy needs Python 2.

[root@master yum.repos.d]# rm /usr/local/bin/python
[root@master yum.repos.d]# ln -s /usr/bin/python2 /usr/local/bin/python
[root@master yum.repos.d]# python -V
Python 2.7.5

Run it again:

[root@master yum.repos.d]# ceph-deploy --version
2.0.1
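The traceback appears because `python` resolves to Python 3 while the EL7 ceph-deploy package is a Python 2 script. A sketch of a pre-flight check (`is_python2` is an illustrative helper; Python 2 prints its version banner on stderr, hence the `2>&1`):

```shell
# Decide whether a `python -V` banner belongs to a 2.x interpreter.
is_python2() {
    case "$1" in
        "Python 2."*) return 0 ;;
        *)            return 1 ;;
    esac
}

ver=$(python -V 2>&1 || true)
if is_python2 "$ver"; then
    echo "ceph-deploy should run: $ver"
else
    echo "relink needed: got '$ver'; point /usr/local/bin/python at /usr/bin/python2"
fi
```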

 

5. Create the Ceph cluster

Since this is a single-node deployment, the pool replica count must be set to 1; edit the ceph.conf file after the cluster is created.

# Create a directory to hold the Ceph config and keys
[root@master yum.repos.d]# mkdir -p /opt/ceph

# Create the Ceph cluster
[root@master yum.repos.d]# cd /opt/ceph
[root@master ceph]# ceph-deploy new master

Here "master" is this host's hostname.

[root@master ceph]# ceph-deploy new master
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy new master
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7fb2ca8b0398>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fb2ca8dc200>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['master']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[master][DEBUG ] connected to host: master
[master][DEBUG ] detect platform information from remote host
[master][DEBUG ] detect machine type
[master][DEBUG ] find the location of an executable
[master][INFO  ] Running command: /usr/sbin/ip link show
[master][INFO  ] Running command: /usr/sbin/ip addr show
[master][DEBUG ] IP addresses found: [u'192.168.1.108', u'10.244.219.64', u'172.17.0.1', u'192.168.122.1', u'192.168.1.103']
[ceph_deploy.new][DEBUG ] Resolving host master
[ceph_deploy.new][DEBUG ] Monitor master at 192.168.1.108
[ceph_deploy.new][DEBUG ] Monitor initial members are ['master']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.1.108']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

 

Because this is a single-node deployment, set the pool replica count to 1 by editing ceph.conf:

[root@master ceph]# cd /opt/ceph
[root@master ceph]# echo "osd pool default size = 1" >> ceph.conf
[root@master ceph]# echo "osd pool default min size = 1" >> ceph.conf
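One caveat with plain `echo >>`: re-running the step appends the same line again. An idempotent variant (`ensure_line` is a hypothetical helper; the demo writes a scratch file instead of /opt/ceph/ceph.conf):

```shell
# Append a line to a file only if it is not already present verbatim.
ensure_line() {
    grep -qxF "$2" "$1" || echo "$2" >> "$1"
}

conf=ceph.conf.demo
: > "$conf"
ensure_line "$conf" "osd pool default size = 1"
ensure_line "$conf" "osd pool default size = 1"   # no-op on the second call
ensure_line "$conf" "osd pool default min size = 1"
cat "$conf"   # each setting appears exactly once
```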

 

The final configuration file:

[root@master ceph]# more ceph.conf 
[global]
fsid = 3af028b9-0f59-4070-bd5d-413316ea81e1
mon_initial_members = master
mon_host = 192.168.1.108
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 1
osd pool default min size = 1

 

6. Install Ceph

yum install ceph ceph-mon ceph-mgr ceph-radosgw ceph-mds -y

 

This fails with:

Error: Package: 2:ceph-common-14.2.22-0.el7.x86_64 (Ceph)
           Requires: liboath.so.0()(64bit)
Error: Package: 2:ceph-mgr-14.2.22-0.el7.x86_64 (Ceph)
           Requires: python-bcrypt
Error: Package: 2:ceph-common-14.2.22-0.el7.x86_64 (Ceph)
           Requires: liboath.so.0(LIBOATH_1.2.0)(64bit)
Error: Package: 2:librgw2-14.2.22-0.el7.x86_64 (Ceph)
           Requires: liboath.so.0()(64bit)
Error: Package: 2:librgw2-14.2.22-0.el7.x86_64 (Ceph)
           Requires: liblttng-ust.so.0()(64bit)
Error: Package: 2:ceph-base-14.2.22-0.el7.x86_64 (Ceph)
           Requires: liboath.so.0(LIBOATH_1.12.0)(64bit)
Error: Package: 2:librados2-14.2.22-0.el7.x86_64 (Ceph)
           Requires: liblttng-ust.so.0()(64bit)

 

Fix: install EPEL (which provides the missing dependencies) and lttng-ust:

yum install epel-release   -y
rpm -Uvh epel-release*rpm
yum install lttng-ust -y

 

7. Initialize the monitor

## Initialize the monitor
[root@master ceph]# ceph-deploy mon create-initial

After it completes, the following files are generated:
[root@master ceph]# pwd
/opt/ceph
[root@master ceph]# ls -al
total 44
drwxr-xr-x  2 root root   244 Aug 12 14:28 .
drwxr-xr-x. 8 root root    83 Aug 12 11:30 ..
-rw-------  1 root root   113 Aug 12 14:28 ceph.bootstrap-mds.keyring
-rw-------  1 root root   113 Aug 12 14:28 ceph.bootstrap-mgr.keyring
-rw-------  1 root root   113 Aug 12 14:28 ceph.bootstrap-osd.keyring
-rw-------  1 root root   113 Aug 12 14:28 ceph.bootstrap-rgw.keyring
-rw-------  1 root root   151 Aug 12 14:28 ceph.client.admin.keyring
-rw-r--r--  1 root root   253 Aug 12 11:35 ceph.conf
-rw-r--r--  1 root root 15502 Aug 12 14:28 ceph-deploy-ceph.log
-rw-------  1 root root    73 Aug 12 11:33 ceph.mon.keyring

 

## Copy the config file and keys to the admin node and the Ceph nodes
[root@master ceph]# ceph-deploy admin master

 

## Make sure the keyrings are readable
[root@master ceph]# chmod +r /etc/ceph/ceph.client.admin.keyring

[root@master ceph]# cp /opt/ceph/ceph* /etc/ceph/
[root@master ceph]# chmod +r /etc/ceph/ceph*

# With the monitor running, check the cluster status

[root@master ceph]# ceph -s
  cluster:
    id:     3af028b9-0f59-4070-bd5d-413316ea81e1
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
            mon master is low on available space

  services:
    mon: 1 daemons, quorum master (age 4m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

 

This leaves a warning:

mon is allowing insecure global_id reclaim

Fix:

[root@master ceph]# ceph config set mon auth_allow_insecure_global_id_reclaim false
[root@master ceph]# ceph -s
  cluster:
    id:     3af028b9-0f59-4070-bd5d-413316ea81e1
    health: HEALTH_WARN
            mon master is low on available space

  services:
    mon: 1 daemons, quorum master (age 7m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

 

8. Deploy mgr

Deploy mgr on the ceph-deploy node (mgr monitors the nodes in the cluster):

[root@master ceph]# ceph-deploy mgr create master


Use ceph -s to confirm that the mgr is now active:

[root@master ceph]# ceph -s
  cluster:
    id:     3af028b9-0f59-4070-bd5d-413316ea81e1
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 1
            mon master is low on available space

  services:
    mon: 1 daemons, quorum master (age 13m)
    mgr: master(active, since 44s)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

 

 

9. Add an OSD disk

Use lsblk to find the name of the free disk (sdb here):

[root@master ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   25G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   24G  0 part 
  ├─centos-root 253:0    0 21.5G  0 lvm  /
  └─centos-swap 253:1    0  2.5G  0 lvm  
sdb               8:16   0    2G  0 disk 
sr0              11:0    1 1024M  0 rom  
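With more disks attached, picking the free one out of `lsblk` by eye gets error-prone. A sketch that derives candidate disks from `lsblk -rno NAME,TYPE,PKNAME` output (`free_disks` is an illustrative helper; a disk counts as free here only if no partition claims it as parent, which is a heuristic, not proof the disk is unused):

```shell
# Print the names of disks that have no partitions on them.
free_disks() {
    awk '$2 == "part" { used[$3] = 1 }
         $2 == "disk" { disks[$1] = 1 }
         END { for (d in disks) if (!(d in used)) print d }'
}

# Fed the layout shown above (sr0 is a rom and is ignored):
printf 'sda disk\nsda1 part sda\nsda2 part sda\nsdb disk\nsr0 rom\n' | free_disks
# On the live host: lsblk -rno NAME,TYPE,PKNAME | free_disks
```

This prints `sdb` for the layout above, since sda carries partitions.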

 

[root@master ~]# ceph-deploy osd create master --data /dev/sdb

This fails with:

[ceph_deploy][ERROR ] ConfigError: Cannot load config: [Errno 2] No such file or directory: 'ceph.conf'; has `ceph-deploy new` been run in this directory?

Fix: run the command from the directory that holds ceph.conf:

[root@master ceph]# cd /opt/ceph
[root@master ceph]# ceph-deploy osd create master --data /dev/sdb

 

# Check the cluster status

[root@master ceph]# ceph -s
  cluster:
    id:     3af028b9-0f59-4070-bd5d-413316ea81e1
    health: HEALTH_WARN
            mon master is low on available space

  services:
    mon: 1 daemons, quorum master (age 10m)
    mgr: master(active, since 10m)
    osd: 1 osds: 1 up (since 2m), 1 in (since 2m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   1.0 GiB used, 1019 MiB / 2.0 GiB avail
    pgs:
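For scripting, the `osd:` line of `ceph -s` can be parsed instead of read by eye. A sketch (`osd_up_count` is an illustrative helper; the line format is copied from the output shown in this post):

```shell
# Extract the "up" count from a `ceph -s` osd status line.
osd_up_count() {
    printf '%s\n' "$1" | sed -n 's/.* \([0-9][0-9]*\) up.*/\1/p'
}

osd_up_count "osd: 1 osds: 1 up (since 2m), 1 in (since 2m)"   # prints 1
# On the live host: osd_up_count "$(ceph -s | grep 'osd:')"
```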

 

10. List the OSDs with ceph osd tree

[root@master ceph]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF 
-1       0.00189 root default                            
-3       0.00189     host master                         
 0   hdd 0.00189         osd.0       up  1.00000 1.00000 

 

