Ceph Configuration File Reference

Date: 2022-07-22
This article is a reference for the Ceph configuration file, covering usage examples, practical tips, key points, and caveats.

Configuration Reference

After the cluster is created, a ceph.conf configuration file is generated:

ceph-deploy new ceph-admin

Configuration Sections

A Ceph configuration file can configure all daemons in a storage cluster, or all daemons of one type. To configure a group of daemons, the settings must go under the section that applies to them:

[global]
  Settings under [global] affect every daemon in the Ceph cluster.
  Example: auth supported = cephx

[osd]
  Settings under [osd] affect all ceph-osd daemons in the storage cluster and override the same option under [global].
  Example: osd journal size = 1000

[mon]
  Settings under [mon] affect all ceph-mon daemons in the cluster and override the same option under [global].
  Example: mon addr = 10.0.0.101:6789

[mds]
  Settings under [mds] affect all ceph-mds daemons in the cluster and override the same option under [global].
  Example: host = myserver01

[client]
  Settings under [client] affect all clients (e.g. mounted Ceph filesystems, mounted block devices, and so on).
  Example: log file = /var/log/ceph/radosgw.log

Global settings affect every daemon instance in the cluster, so [global] is the place for options that should apply to all daemons. Any of the more specific sections can override a [global] setting:

[global]
#Enable authentication between hosts within the cluster.
#v 0.54 and earlier
auth supported = cephx

#v 0.55 and after
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
[osd]
osd journal size = 1000
[osd.1]
# settings affect osd.1 only.

[mon.a]
# settings affect mon.a only.

[mds.b]
# settings affect mds.b only.
[client.radosgw.instance-name]
# settings affect client.radosgw.instance-name only.
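The precedence rules in the example above can be sketched as a small Python lookup. `resolve` is a hypothetical helper (not a Ceph API) that mimics how a daemon sees its configuration: the instance section wins over the type section, which wins over [global].

```python
# Toy illustration of ceph.conf section precedence (not Ceph code):
# [osd.1] overrides [osd], which overrides [global].

def resolve(conf, daemon_type, daemon_id, key):
    """Look up `key` for daemon `type.id`, most specific section first."""
    for section in (f"{daemon_type}.{daemon_id}", daemon_type, "global"):
        if key in conf.get(section, {}):
            return conf[section][key]
    return None

conf = {
    "global": {"auth cluster required": "cephx", "osd journal size": "5120"},
    "osd": {"osd journal size": "1000"},
    "osd.1": {"osd journal size": "2000"},
}

print(resolve(conf, "osd", 1, "osd journal size"))    # [osd.1] wins -> 2000
print(resolve(conf, "osd", 0, "osd journal size"))    # falls back to [osd] -> 1000
print(resolve(conf, "mon", "a", "osd journal size"))  # falls back to [global] -> 5120
```

Note that a setting absent from every matching section simply resolves to the daemon's compiled-in default, which the toy resolver models as `None`.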
Metavariables are expanded by Ceph when the configuration is read:

$cluster
  Expands to the storage cluster name; useful when running multiple clusters on the same hardware.
  Example: /etc/ceph/$cluster.keyring
  Default: ceph

$type
  Expands to one of mds, osd, or mon, depending on the type of the current daemon.
  Example: /var/lib/ceph/$type

$id
  Expands to the daemon identifier; for osd.0 it is 0, for mds.a it is a.
  Example: /var/lib/ceph/$type/$cluster-$id

$host
  Expands to the hostname of the current daemon.

$name
  Expands to $type.$id.
  Example: /var/run/ceph/$cluster-$name.asok
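The metavariable expansion can be illustrated with a toy substitution in Python. `expand` is an illustrative helper, not librados/Ceph code; Ceph performs the equivalent substitution internally when it reads the configuration.

```python
# Toy expansion of Ceph metavariables (illustrative only):

def expand(template, cluster, daemon_type, daemon_id, host):
    values = {
        "$cluster": cluster,                    # cluster name, default "ceph"
        "$name": f"{daemon_type}.{daemon_id}",  # $type.$id
        "$type": daemon_type,                   # mds, osd, or mon
        "$id": str(daemon_id),                  # daemon identifier
        "$host": host,                          # daemon's hostname
    }
    for var, val in values.items():
        template = template.replace(var, val)
    return template

print(expand("/var/lib/ceph/$type/$cluster-$id", "ceph", "osd", 0, "node1"))
# -> /var/lib/ceph/osd/ceph-0
print(expand("/var/run/ceph/$cluster-$name.asok", "ceph", "mon", "a", "node1"))
# -> /var/run/ceph/ceph-mon.a.asok
```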
[global]
mon_initial_members = ceph1
mon_host = 10.0.0.1

Note: a Ceph cluster can run with a single monitor, but if that monitor fails, the data service is interrupted because no monitor is left.
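The single-monitor caveat follows from how monitors work: they form a Paxos quorum, and a majority must be alive for the cluster to serve data. A small sketch (assumed majority-quorum arithmetic, not Ceph code) shows why one or two monitors tolerate no failures and why odd counts are preferred:

```python
# Monitors need a majority (quorum) to operate. This computes how many
# monitor failures a cluster of n monitors can survive.

def tolerated_failures(num_mons):
    quorum = num_mons // 2 + 1   # majority needed for quorum
    return num_mons - quorum     # monitors that may fail

for n in (1, 2, 3, 5):
    print(n, "monitors ->", tolerated_failures(n), "failure(s) tolerated")
# 1 -> 0, 2 -> 0, 3 -> 1, 5 -> 2
```

Adding a second monitor therefore buys no fault tolerance; three is the usual minimum for production.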
[mon.a]
host = hostName
mon addr = 150.140.130.120:6789

The monitor data path expands from /var/lib/ceph/mon/$cluster-$id, e.g. /var/lib/ceph/mon/ceph-a.

auth cluster required = cephx
auth service required = cephx
auth client required = cephx

Note: when upgrading, we recommend explicitly disabling authentication first, then performing the upgrade, and re-enabling authentication once the upgrade is complete.
[osd]
osd journal size = 10000

[osd.0]
host = {hostname} # manual deployments only

The OSD data path expands from /var/lib/ceph/osd/$cluster-$id, e.g. /var/lib/ceph/osd/ceph-0. For manual deployments, create the data directory and mount the prepared disk on it:

ssh {osd-host}
sudo mkdir /var/lib/ceph/osd/ceph-{osd-number}
ssh {new-osd-host}
sudo mkfs -t {fstype} /dev/{disk}
sudo mount -o user_xattr /dev/{hdd} /var/lib/ceph/osd/ceph-{osd-number}
[global]
fsid = {cluster-id}
mon initial members = {hostname}[, {hostname}]
mon host = {ip-address}[, {ip-address}]

#All clusters have a front-side public network.
#If you have two NICs, you can configure a back side cluster 
#network for OSD object replication, heart beats, backfilling,
#recovery, etc.
public network = {network}[, {network}]
#cluster network = {network}[, {network}] 

#Clusters require authentication by default.
auth cluster required = cephx
auth service required = cephx
auth client required = cephx

#Choose reasonable numbers for your journals, number of replicas
#and placement groups.
osd journal size = {n}
osd pool default size = {n}  # Write an object n times.
osd pool default min size = {n} # Allow writing n copies in a degraded state.
osd pool default pg num = {n}
osd pool default pgp num = {n}

#Choose a reasonable crush leaf type.
#0 for a 1-node cluster.
#1 for a multi node cluster in a single rack
#2 for a multi node, multi chassis cluster with multiple hosts in a chassis
#3 for a multi node cluster with hosts across racks, etc.
osd crush chooseleaf type = {n}
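For the "reasonable number of placement groups" above, a common rule of thumb from the Ceph documentation is (OSDs x 100) / replica count, rounded up to the nearest power of two. A sketch of that arithmetic (the helper name and the target of 100 PGs per OSD are the conventional guideline, not a hard rule):

```python
# Rule-of-thumb PG count: (OSDs * 100) / replicas, rounded up to the
# nearest power of two.

def suggested_pg_num(num_osds, pool_size, target_pgs_per_osd=100):
    raw = num_osds * target_pgs_per_osd / pool_size
    power = 1
    while power < raw:
        power *= 2
    return power

print(suggested_pg_num(9, 3))  # 9 OSDs, 3 replicas: 300 -> 512
print(suggested_pg_num(4, 2))  # 4 OSDs, 2 replicas: 200 -> 256
```

The result goes into `osd pool default pg num` and `osd pool default pgp num` (the two are normally kept equal).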
Settings can be changed at runtime by injecting arguments into running daemons:

ceph tell {daemon-type}.{id or *} injectargs --{name} {value} [--{name} {value}]
ceph tell osd.0 injectargs --debug-osd 20 --debug-ms 1

To view the runtime configuration of a daemon:

ceph daemon {daemon-type}.{id} config show | less
Note: a cluster name may contain only the letters a-z and the digits 0-9.

sudo mkdir /var/lib/ceph/osd/openstack-0
sudo mkdir /var/lib/ceph/mon/openstack-a

Note: when running multiple monitors on one host, each needs a distinct port. Monitors use port 6789 by default; if it is already taken, the other cluster must use a different port.

ceph -c {cluster-name}.conf health
ceph -c openstack.conf health
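The cluster-name restriction (letters a-z and digits 0-9 only) is easy to check up front. `valid_cluster_name` is an illustrative helper, not a Ceph API; Ceph enforces its own validation:

```python
# Check a proposed cluster name against the a-z / 0-9 rule.
import re

def valid_cluster_name(name):
    return re.fullmatch(r"[a-z0-9]+", name) is not None

print(valid_cluster_name("openstack"))   # True
print(valid_cluster_name("Open-Stack"))  # False: uppercase and '-' not allowed
```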

Configuration Reference

[global] # global settings
fsid = xxxxxx                                 # cluster ID
mon host = 10.0.1.1,10.0.1.2,10.0.1.3         # monitor IP addresses
auth cluster required = cephx                 # cluster authentication
auth service required = cephx                 # service authentication
auth client required = cephx                  # client authentication
osd pool default size = 3                     # number of replicas per pool; default 3
osd pool default min size = 1                 # minimum replicas a PG needs to accept I/O; a degraded PG still serves I/O with min_size replicas
public network = 10.0.1.0/24                  # public network (the monitor IP range)
cluster network = 10.0.2.0/24                 # cluster network
max open files = 131072                       # default 0; if set, Ceph sets the system's max open fds
mon initial members = node1, node2, node3     # initial monitors (as given when the monitors were created)
##############################################################
[mon]
mon data = /var/lib/ceph/mon/ceph-$id
mon clock drift allowed = 1          # default 0.05; clock drift allowed between monitors (seconds)
mon osd min down reporters = 13      # default 1; minimum number of OSDs that must report a peer down before the monitor believes it
mon osd down out interval = 600      # default 300; seconds Ceph waits after an OSD goes down before marking it out
##############################################################
[osd]
osd data = /var/lib/ceph/osd/ceph-$id
osd journal size = 20000                 # default 5120; OSD journal size (MB)
osd journal = /var/lib/ceph/osd/$cluster-$id/journal  # OSD journal location
osd mkfs type = xfs                      # filesystem type used when formatting
osd max write size = 512                 # default 90; maximum size of a single OSD write (MB)
osd client message size cap = 2147483648 # default 100; maximum client data allowed in memory (bytes)
osd deep scrub stride = 131072           # default 524288; bytes read per chunk during deep scrub
osd op threads = 16                      # default 2; threads servicing OSD operations
osd disk threads = 4                     # default 1; threads for disk-intensive work such as recovery and scrubbing
osd map cache size = 1024                # default 500; OSD map cache kept in memory (MB)
osd map cache bl size = 128              # default 50; OSD map cache in the OSD process memory (MB)
osd mount options xfs = "rw,noexec,nodev,noatime,nodiratime,nobarrier"  # default rw,noatime,inode64; mount options for XFS-backed OSDs
osd recovery op priority = 2             # default 10; recovery operation priority, 1-63; higher values use more resources
osd recovery max active = 10             # default 15; active recovery requests at any one time
osd max backfills = 4                    # default 10; maximum concurrent backfills per OSD
osd min pg log entries = 30000           # default 3000; minimum PG log entries kept when trimming the PG log
osd max pg log entries = 100000          # default 10000; maximum PG log entries kept when trimming the PG log
osd mon heartbeat interval = 40          # default 30; interval between OSD pings to a monitor (seconds)
ms dispatch throttle bytes = 1048576000  # default 104857600; maximum size of messages waiting to be dispatched (bytes)
objecter inflight ops = 819200           # default 1024; client flow control: maximum unsent I/O requests; beyond this, application I/O blocks; 0 means unlimited
osd op log threshold = 50                # default 5; number of operations to show in one log entry
osd crush chooseleaf type = 0            # default 1; bucket type used by chooseleaf in CRUSH rules
filestore xattr use omap = true          # default false; use the object map for XATTRs; required on EXT4, optional on XFS or btrfs
filestore min sync interval = 10         # default 0.1; minimum interval between journal-to-data-disk syncs (seconds)
filestore max sync interval = 15         # default 5; maximum interval between journal-to-data-disk syncs (seconds)
filestore queue max ops = 25000          # default 500; maximum operations accepted by the data-disk queue
filestore queue max bytes = 1048576000   # default 100; maximum bytes per data-disk operation (bytes)
filestore queue committing max ops = 50000      # default 500; operations the data disk can commit at once
filestore queue committing max bytes = 10485760000  # default 100; maximum bytes the data disk can commit (bytes)
filestore split multiple = 8             # default 2; maximum number of files in a subdirectory before it is split into child directories
filestore merge threshold = 40           # default 10; minimum number of files in a subdirectory before it is merged back into its parent
filestore fd cache size = 1024           # default 128; object file-handle cache size
filestore op threads = 32                # default 2; concurrent filesystem operations
journal max write bytes = 1073714824     # default 1048560; maximum bytes written to the journal at once (bytes)
journal max write entries = 10000        # default 100; maximum entries written to the journal at once
journal queue max ops = 50000            # default 50; maximum operations in the journal queue at once
journal queue max bytes = 10485760000    # default 33554432; maximum bytes in the journal queue at once (bytes)
##############################################################
[client]
rbd cache = true                         # default true; enable RBD caching
rbd cache size = 335544320               # default 33554432; RBD cache size (bytes)
rbd cache max dirty = 134217728          # default 25165824; maximum dirty bytes allowed in write-back mode; 0 means write-through (bytes)
rbd cache max dirty age = 30             # default 1; how long dirty data may sit in the cache before being flushed to disk (seconds)
rbd cache writethrough until flush = false  # default true; compatibility option for virtio drivers before linux-2.6.32, which never send flush requests, so data would otherwise never be written back. When set, librbd performs I/O in writethrough mode until the first flush request arrives, then switches to writeback.
rbd cache max dirty object = 2           # default 0, meaning the value is computed from rbd cache size; librbd logically splits a disk image into 4MB chunks by default, each mapped to an Object, and manages its cache per Object; raising this value can improve performance
rbd cache target dirty = 235544320       # default 16777216; dirty-data size at which write-back starts; must not exceed rbd_cache_max_dirty
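The comment on `rbd cache max dirty object` says that with the default of 0 the object count is derived from the cache size, with images split into 4MB chunks. A sketch of that derivation, assuming a simple cache-size / chunk-size division (illustrative only; librbd's internal formula may differ):

```python
# Sketch of deriving the dirty-object limit from the RBD cache size.
# Assumes plain division by the default 4 MiB chunk size (an assumption,
# not librbd source).

OBJECT_SIZE = 4 * 1024 * 1024  # librbd's default 4 MiB image chunk

def max_dirty_objects(rbd_cache_size, configured=0):
    if configured:                        # an explicit setting wins
        return configured
    return rbd_cache_size // OBJECT_SIZE  # 0 -> derive from cache size

print(max_dirty_objects(33554432))     # default 32 MiB cache -> 8 objects
print(max_dirty_objects(335544320))    # 320 MiB cache -> 80 objects
print(max_dirty_objects(33554432, 2))  # explicit value -> 2
```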