GlusterFS: Introduction and Usage

Date: 2019-02-28

GlusterFS Configuration and Usage
Creating a GlusterFS Cluster

I. Introduction
GlusterFS Overview

GlusterFS is an open-source distributed file system and the core of scale-out storage; it can serve thousands of clients. GlusterFS can flexibly combine physical, virtual, and cloud resources to deliver highly available, enterprise-grade storage.
GlusterFS aggregates storage bricks over TCP/IP or InfiniBand RDMA interconnects and manages data, disk, and memory resources under a single global namespace.
GlusterFS is built on a stackable user-space design and can deliver excellent performance for a variety of workloads.
GlusterFS supports standard clients running standard applications over any standard IP network; as shown in Figure 1, users can access application data in the globally unified namespace using standard protocols such as NFS/CIFS.
Key features of GlusterFS

Scalability and high performance
High availability
Globally unified namespace
Elastic hash algorithm
Elastic volume management
Based on standard protocols
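The elastic hash above is the heart of file placement: each file name is hashed, and the hash decides which brick holds the file, so no central metadata server is needed. A toy sketch of the idea (illustrative only: the brick names are hypothetical, and GlusterFS actually uses a Davies-Meyer hash over per-directory ranges, not cksum):

```shell
# Map a file name to one of three hypothetical bricks with a simple
# checksum -- same idea as elastic hashing: placement is computed,
# not looked up in a metadata server.
pick_brick() {
  h=$(printf '%s' "$1" | cksum | cut -d' ' -f1)
  echo "brick$((h % 3))"
}
for f in a.txt b.txt c.txt d.txt; do
  echo "$f -> $(pick_brick "$f")"
done
```

The same name always hashes to the same brick, which is why a pure distributed volume can locate any file without asking the other servers.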
How it works:

1) On the client, the user reads and writes data through the glusterfs mount point. The cluster is completely transparent to the user, who cannot tell whether an operation touches the local system or the remote cluster.
2) The operation is handed to the local Linux VFS layer.
3) VFS passes the data to the FUSE kernel file system. Before the glusterfs client starts, an actual file system, FUSE, must be registered with the kernel. As the figure shows, FUSE sits at the same layer as ext3; but where ext3 operates on a real disk, the fuse file system forwards the data through the /dev/fuse device file to the glusterfs client. FUSE can therefore be thought of as a proxy.
4) Once FUSE hands the data to the glusterfs client, the client runs it through a configured series of processing steps (defined in the client volume file, which is specified when the glusterfs client is started).
5) At the end of the client-side pipeline, the data is sent over the network to the GlusterFS server, which writes it to the storage devices it controls.
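A quick way to see the FUSE layer from step 3 on any Linux machine (informational only; this works even before Gluster is installed):

```shell
# FUSE shows up as a registered filesystem type, and /dev/fuse is the
# device file the glusterfs client reads requests from.
grep fuse /proc/filesystems || echo "fuse not registered yet"
ls -l /dev/fuse 2>/dev/null || echo "/dev/fuse not present (modprobe fuse creates it)"
```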

 

Common volume types

Distributed (distribute)

Replicated (replicate)

Striped (striped)

Basic volumes:

distribute volume: distributed volume

 

In a distributed volume, files are spread randomly across the bricks in the volume. Use distributed volumes when you need to scale storage and redundancy is either unimportant or provided by other hardware/software layers. (In short: files are placed onto bricks by a hash algorithm; each file is stored on exactly one server, with no mirroring or striping within the storage pool.)

replica volume: replicated volume

Replicated volumes create copies of files across multiple bricks in the volume. Use replicated volumes in environments where high availability and high reliability are critical. (In short: similar to RAID 1; the replica count must equal the number of bricks in the volume, so availability is high. The bricks mirror one another, so a failed disk in the storage pool does not affect data access. At least two servers are required.)

 

stripe volume: striped volume

Striped volumes stripe data across the bricks in the volume. For best results, use striped volumes only in high-concurrency environments where very large files are accessed. (In short: similar to RAID 0; the stripe count must equal the number of bricks in the volume. Files are split into chunks stored round-robin across the bricks; the unit of concurrency is the chunk, which gives good performance for large files.)

 

Compound volumes:

distribute stripe volume: distributed striped volume

Distributed striped volumes stripe files across two or more nodes in the cluster. For best results, use them where scaling storage and highly concurrent access to very large files are both critical. (In short: the number of bricks in the volume must be a multiple of the stripe count (at least 2x), combining the distributed and striped behaviors. Each file is striped across a set of servers; this type is typically used for large-file access, and at least 4 servers are required.)

distribute replica volume: distributed replicated volume

Distributed replicated volumes distribute files across replicated sets of bricks. Use them in environments where scaling storage and high reliability are both critical; they also provide improved read performance in most environments.

stripe replica volume: striped replicated volume

Striped replicated volumes stripe data across replicated bricks in the cluster. For best results, use them in highly concurrent environments where parallel access to very large files and performance are critical. In this release, this volume type is supported only for Map Reduce workloads.

distribute stripe replica volume: distributed striped replicated volume

Distributed striped replicated volumes distribute striped data across replicated sets of bricks in the cluster. For best results, use them in highly concurrent environments where parallel access to very large files and performance are critical. In this release, this volume type is supported only for Map Reduce workloads.

 

 

 

II. Environment Planning

 

Operating system    IP address        Hostname

CentOS 7.2          192.168.10.101    linux-node1.server.com
CentOS 7.2          192.168.10.102    linux-node2.server.com
CentOS 7.2          192.168.10.103    linux-node3.server.com
CentOS 7.2          192.168.10.105    linux-node5.server.com

 

 

1. Environment preparation (on all four test machines):

Disable the firewall and SELinux, and synchronize the time.

Disable the firewall:

 systemctl stop firewalld

 systemctl disable firewalld

Disable SELinux:

 sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

 setenforce 0

Synchronize the time:

yum -y install wget net-tools ntp ntpdate lrzsz

systemctl restart ntpdate.service ntpd.service && systemctl enable ntpd.service ntpdate.service

 

 

 

2. Configure host mappings in /etc/hosts (on all four test machines):

echo "192.168.10.101  linux-node1.server.com" >> /etc/hosts

echo "192.168.10.102  linux-node2.server.com" >> /etc/hosts

echo "192.168.10.103  linux-node3.server.com" >> /etc/hosts

echo "192.168.10.105  linux-node5.server.com" >> /etc/hosts
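A quick sanity check of the mappings (getent consults /etc/hosts directly, so this works without DNS; the hostnames are the ones used in this article):

```shell
# Each hostname should resolve to the IP written above; anything that
# does not resolve is reported.
for h in linux-node1.server.com linux-node2.server.com \
         linux-node3.server.com linux-node5.server.com; do
  getent hosts "$h" || echo "$h: not found in /etc/hosts"
done
```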

 

 

3. Install the EPEL yum repository (on node1, node2, node3):

 

yum -y install https://mirrors.ustc.edu.cn/epel//7/x86_64/Packages/e/epel-release-7-11.noarch.rpm

 

 

4. Install GlusterFS (very straightforward on CentOS 7; on node1, node2, node3):

yum install -y centos-release-gluster

yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma

 

 

5. Start GlusterFS (on node1, node2, node3):

systemctl start glusterd

systemctl enable glusterd

netstat -tunlp | grep glusterd

tcp        0      0 0.0.0.0:24007           0.0.0.0:*               LISTEN      11693/glusterd  

 

 

 

 

III. Gluster Management (docs: https://gluster.readthedocs.io/en/latest/Quick-Start-Guide/Architecture/):

 

 

1. Build the trusted pool (i.e., the cluster). GlusterFS peers are equal; there is no master/slave concept (run on node1 only).

[root@linux-node1 ~]# gluster peer help

peer detach { <HOSTNAME> | <IP-address> } [force] - detach peer specified by <HOSTNAME>

peer help - display help for peer commands

peer probe { <HOSTNAME> | <IP-address> } - probe peer specified by <HOSTNAME>

peer status - list status of peers

pool list - list all the nodes in the pool (including localhost)

 

 

[root@linux-node1 ~]# gluster peer probe 192.168.10.102

peer probe: success.

[root@linux-node1 ~]# gluster peer probe 192.168.10.103

peer probe: success.

 

 

2. Check the cluster status (can be run on any of node1, node2, node3):

[root@linux-node1 ~]# gluster peer status

Number of Peers: 2

Hostname: 192.168.10.102

Uuid: 5347d707-e1fd-4988-b457-42919a269d98

State: Peer in Cluster (Connected)

Hostname: 192.168.10.103

Uuid: 05d3847b-f159-4960-be80-197991fef587

State: Peer in Cluster (Connected)

 

 

 

3. Create a distributed volume:

 

3.1 Create the brick directories (on node1, node2, node3):

[root@linux-node1 ~]# mkdir -p /opt/gluster/exp1

[root@linux-node2 ~]#  mkdir -p /opt/gluster/exp2

[root@linux-node3 ~]#  mkdir -p /opt/gluster/exp3

 

 

3.2 Create the distributed volume (run on node1 only):

[root@linux-node1 ~]# gluster volume create test-volume 192.168.10.101:/opt/gluster/exp1 192.168.10.102:/opt/gluster/exp2 192.168.10.103:/opt/gluster/exp3 force

volume create: test-volume: success: please start the volume to access data

 

3.3 Check the volume status (run on node1 only):

[root@linux-node1 ~]# gluster volume info

Volume Name: test-volume

Type: Distribute

Volume ID: e2248fcf-a37c-44af-92ce-9e31a3a98764

Status: Created

Snapshot Count: 0

Number of Bricks: 3

Transport-type: tcp

Bricks:

Brick1: 192.168.10.101:/opt/gluster/exp1

Brick2: 192.168.10.102:/opt/gluster/exp2

Brick3: 192.168.10.103:/opt/gluster/exp3

Options Reconfigured:

transport.address-family: inet

nfs.disable: on

 

 

 

 

4. Create a replicated volume:

 

4.1 Create the brick directories (on node1, node2, node3):

[root@linux-node1 ~]# mkdir /opt/gluster/exp4

[root@linux-node2 ~]# mkdir /opt/gluster/exp5

[root@linux-node3 ~]# mkdir /opt/gluster/exp6

 

 

4.2 Create the replicated volume (run on node1 only):

[root@linux-node1 ~]# gluster volume create repl-volume replica 3 transport tcp 192.168.10.101:/opt/gluster/exp4 192.168.10.102:/opt/gluster/exp5 192.168.10.103:/opt/gluster/exp6 force

volume create: repl-volume: success: please start the volume to access data

 

 

4.3 Check the volume status (run on node1 only):

[root@linux-node1 ~]# gluster volume info repl-volume

Volume Name: repl-volume

Type: Replicate

Volume ID: 37c5200f-75f7-4f53-aca3-0a733a192708

Status: Created

Snapshot Count: 0

Number of Bricks: 1 x 3 = 3

Transport-type: tcp

Bricks:

Brick1: 192.168.10.101:/opt/gluster/exp4

Brick2: 192.168.10.102:/opt/gluster/exp5

Brick3: 192.168.10.103:/opt/gluster/exp6

Options Reconfigured:

transport.address-family: inet

nfs.disable: on

performance.client-io-threads: off

 

 

 

5. Striped volume (similar to RAID 0):

 

5.1 Create the brick directories (on node1, node2, node3):

[root@linux-node1 ~]# mkdir /opt/gluster/exp7

[root@linux-node2 ~]# mkdir /opt/gluster/exp8

[root@linux-node3 ~]# mkdir /opt/gluster/exp9

 

5.2 Create the striped volume (run on node1 only):

[root@linux-node1 ~]# gluster volume create raid0-volume stripe 3 transport tcp 192.168.10.101:/opt/gluster/exp7 192.168.10.102:/opt/gluster/exp8 192.168.10.103:/opt/gluster/exp9 force

volume create: raid0-volume: success: please start the volume to access data

 

 

5.3 Check the volume status:

[root@linux-node1 ~]# gluster volume info raid0-volume

Volume Name: raid0-volume

Type: Stripe

Volume ID: 61a654be-74c5-4514-80b3-ee2072df4c89

Status: Created

Snapshot Count: 0

Number of Bricks: 1 x 3 = 3

Transport-type: tcp

Bricks:

Brick1: 192.168.10.101:/opt/gluster/exp7

Brick2: 192.168.10.102:/opt/gluster/exp8

Brick3: 192.168.10.103:/opt/gluster/exp9

Options Reconfigured:

transport.address-family: inet

nfs.disable: on

 

 

 

6. Volumes must be started before they can be used:

6.1 Check:

[root@linux-node1 ~]# gluster volume status

Volume repl-volume is not started

Volume raid0-volume is not started

Volume test-volume is not started

 

6.2 Start:

[root@linux-node1 ~]# gluster volume start repl-volume

volume start: repl-volume: success

[root@linux-node1 ~]# gluster volume start raid0-volume

volume start: raid0-volume: success

[root@linux-node1 ~]# gluster volume start test-volume

volume start: test-volume: success

 

6.3 Check again:

[root@linux-node1 ~]# gluster volume info

Volume Name: repl-volume

Type: Replicate

Volume ID: 37c5200f-75f7-4f53-aca3-0a733a192708

Status: Started

Snapshot Count: 0

Number of Bricks: 1 x 3 = 3

Transport-type: tcp

Bricks:

Brick1: 192.168.10.101:/opt/gluster/exp4

Brick2: 192.168.10.102:/opt/gluster/exp5

Brick3: 192.168.10.103:/opt/gluster/exp6

Options Reconfigured:

transport.address-family: inet

nfs.disable: on

performance.client-io-threads: off

 

Volume Name: raid0-volume

Type: Stripe

Volume ID: 61a654be-74c5-4514-80b3-ee2072df4c89

Status: Started

Snapshot Count: 0

Number of Bricks: 1 x 3 = 3

Transport-type: tcp

Bricks:

Brick1: 192.168.10.101:/opt/gluster/exp7

Brick2: 192.168.10.102:/opt/gluster/exp8

Brick3: 192.168.10.103:/opt/gluster/exp9

Options Reconfigured:

transport.address-family: inet

nfs.disable: on

 

Volume Name: test-volume

Type: Distribute

Volume ID: e2248fcf-a37c-44af-92ce-9e31a3a98764

Status: Started

Snapshot Count: 0

Number of Bricks: 3

Transport-type: tcp

Bricks:

Brick1: 192.168.10.101:/opt/gluster/exp1

Brick2: 192.168.10.102:/opt/gluster/exp2

Brick3: 192.168.10.103:/opt/gluster/exp3

Options Reconfigured:

transport.address-family: inet

nfs.disable: on

 

 

 

 

7. Mount the volumes and test:

7.1 Install the glusterfs client packages on the client machine (on node5):

 

[root@linux-node5 ~]# yum install -y centos-release-gluster

[root@linux-node5 ~]# yum install -y glusterfs-client

(or: yum install -y glusterfs glusterfs-fuse glusterfs-rdma)

7.2 Create the mount points (on node5):

[root@linux-node5 ~]# mkdir /mnt/a1 /mnt/a2 /mnt/a3

 

7.3 Mount (on node5):

[root@linux-node5 ~]# mount.glusterfs 192.168.10.101:/test-volume /mnt/a1/

[root@linux-node5 ~]# mount.glusterfs 192.168.10.101:/repl-volume /mnt/a2/

[root@linux-node5 ~]# mount.glusterfs 192.168.10.101:/raid0-volume /mnt/a3/
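To make these mounts survive a reboot, the usual approach is one /etc/fstab entry per volume using the glusterfs mount helper (a sketch based on standard fstab syntax; `_netdev` defers the mount until the network is up):

```
192.168.10.101:/test-volume   /mnt/a1   glusterfs   defaults,_netdev   0 0
192.168.10.101:/repl-volume   /mnt/a2   glusterfs   defaults,_netdev   0 0
192.168.10.101:/raid0-volume  /mnt/a3   glusterfs   defaults,_netdev   0 0
```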

 

7.4 Check (on node5):

[root@linux-node5 ~]# df -hT

Filesystem                   Type            Size  Used Avail Use% Mounted on

/dev/mapper/centos-root      xfs              18G  3.9G   14G  23% /

devtmpfs                     devtmpfs        479M     0  479M   0% /dev

tmpfs                        tmpfs           489M     0  489M   0% /dev/shm

tmpfs                        tmpfs           489M  6.8M  483M   2% /run

tmpfs                        tmpfs           489M     0  489M   0% /sys/fs/cgroup

/dev/sda1                    xfs             497M  125M  373M  26% /boot

tmpfs                        tmpfs            98M     0   98M   0% /run/user/0

192.168.10.101:/test-volume  fuse.glusterfs   53G   13G   40G  24% /mnt/a1

192.168.10.101:/repl-volume  fuse.glusterfs   18G  4.5G   14G  26% /mnt/a2

192.168.10.101:/raid0-volume fuse.glusterfs   53G   13G   40G  24% /mnt/a3
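The sizes in the df output above follow from the volume types (each brick here sits on an ~18G root filesystem): a distributed or striped volume adds its bricks' capacities together, while a replica-3 volume exposes only one brick's worth of space. Roughly:

```shell
# 3 bricks of ~18G each (values taken from this article's df output;
# df reports ~53G rather than 54G because each brick is really ~17.6G).
echo "distribute/stripe capacity: about $((3 * 18))G"
echo "replica 3 capacity: about 18G"
```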

 

7.5 Write some files (on node5):

[root@linux-node5 ~]# echo abc > /mnt/a1/test1.txt  # write to the distributed volume

[root@linux-node5 ~]# echo aaa > /mnt/a1/test2.txt

[root@linux-node5 ~]#

[root@linux-node5 ~]# echo aaa > /mnt/a2/test3.txt  # write to the replicated volume

[root@linux-node5 ~]#

[root@linux-node5 ~]# echo aaa > /mnt/a3/test4.txt  # write to the striped volume

 

7.6 Check the results (on node1, node2, node3):

[root@linux-node1 ~]# tree /opt/gluster/

/opt/gluster/

├── exp1

│   └── test2.txt

├── exp4

│   └── test3.txt

└── exp7

    └── test4.txt

 

 

[root@linux-node2 ~]# tree /opt/gluster/

/opt/gluster/

├── exp2

│   └── test1.txt

├── exp5

│   └── test3.txt

└── exp8

    └── test4.txt

 

[root@linux-node3 ~]# tree /opt/gluster/

/opt/gluster/

├── exp3

├── exp6

│   └── test3.txt

└── exp9

    └── test4.txt

 

8. Distributed replicated volume (recommended):

8.1 Create the brick directories (on node1, node2, node3):

[root@linux-node1 ~]# mkdir /opt/gluster/exp10 /opt/gluster/exp11

[root@linux-node2 ~]# mkdir /opt/gluster/exp10 /opt/gluster/exp11

[root@linux-node3 ~]# mkdir /opt/gluster/exp10 /opt/gluster/exp11

 

 

8.2 Create the distributed replicated volume (run on node1 only). With replica 3 and six bricks, the bricks are grouped into replica sets in the order listed: the three exp10 bricks form one set and the three exp11 bricks the other (2 x 3 = 6):

[root@linux-node1 ~]# gluster volume create test1-volume replica 3 transport tcp 192.168.10.101:/opt/gluster/exp10/ 192.168.10.102:/opt/gluster/exp10/ 192.168.10.103:/opt/gluster/exp10/ 192.168.10.101:/opt/gluster/exp11/ 192.168.10.102:/opt/gluster/exp11/ 192.168.10.103:/opt/gluster/exp11/ force

volume create: test1-volume: success: please start the volume to access data

 

8.3 Start the distributed replicated volume (run on node1 only):

[root@linux-node1 ~]# gluster volume start test1-volume

volume start: test1-volume: success

 

 

8.4 Test from the client (on node5):

[root@linux-node5 ~]# mkdir /mnt/aaa  # create the mount point

[root@linux-node5 ~]# mount.glusterfs 192.168.10.101:/test1-volume /mnt/aaa/   # mount

[root@linux-node5 ~]# df -hT   # verify the mount

Filesystem                   Type            Size  Used Avail Use% Mounted on

/dev/mapper/centos-root      xfs              18G  3.9G   14G  23% /

devtmpfs                     devtmpfs        479M     0  479M   0% /dev

tmpfs                        tmpfs           489M     0  489M   0% /dev/shm

tmpfs                        tmpfs           489M  6.8M  483M   2% /run

tmpfs                        tmpfs           489M     0  489M   0% /sys/fs/cgroup

/dev/sda1                    xfs             497M  125M  373M  26% /boot

tmpfs                        tmpfs            98M     0   98M   0% /run/user/0

192.168.10.101:/test-volume  fuse.glusterfs   53G   13G   40G  24% /mnt/a1

192.168.10.101:/repl-volume  fuse.glusterfs   18G  4.5G   14G  26% /mnt/a2

192.168.10.101:/raid0-volume fuse.glusterfs   53G   13G   40G  24% /mnt/a3

192.168.10.101:/test1-volume fuse.glusterfs   18G  4.5G   14G  26% /mnt/aaa

 

[root@linux-node5 ~]# echo 1 > /mnt/aaa/1.txt   # write some files

[root@linux-node5 ~]# echo 1 > /mnt/aaa/2.txt

[root@linux-node5 ~]# echo 1 > /mnt/aaa/3.txt

[root@linux-node5 ~]# echo 1 > /mnt/aaa/4.txt

 

 

 

8.5 Check the results (on node1, node2, node3):

[root@linux-node1 ~]# tree /opt/gluster/

/opt/gluster/

├── exp1

│   └── test2.txt

├── exp10

│   └── 4.txt

├── exp11

│   ├── 1.txt

│   ├── 2.txt

│   └── 3.txt

 

 

[root@linux-node2 ~]# tree /opt/gluster/

/opt/gluster/

├── exp10

│   └── 4.txt

├── exp11

│   ├── 1.txt

│   ├── 2.txt

│   └── 3.txt

 

[root@linux-node3 ~]# tree /opt/gluster/

/opt/gluster/

├── exp10

│   └── 4.txt

├── exp11

│   ├── 1.txt

│   ├── 2.txt

│   └── 3.txt

 

 

 

 

9. Expanding and shrinking volumes (https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Managing%20Volumes/):

 

Expanding a distributed volume:
9.1 Write some data from the client (on node5):

[root@linux-node5 ~]# touch /mnt/a1/{10..19}.txt

[root@linux-node5 ~]# ll /mnt/a1/

total 1

-rw-r--r-- 1 root root 0 Sep 18 11:39 10.txt

-rw-r--r-- 1 root root 0 Sep 18 11:39 11.txt

-rw-r--r-- 1 root root 0 Sep 18 11:39 12.txt

-rw-r--r-- 1 root root 0 Sep 18 11:39 13.txt

-rw-r--r-- 1 root root 0 Sep 18 11:39 14.txt

-rw-r--r-- 1 root root 0 Sep 18 11:39 15.txt

-rw-r--r-- 1 root root 0 Sep 18 11:39 16.txt

-rw-r--r-- 1 root root 0 Sep 18 11:39 17.txt

-rw-r--r-- 1 root root 0 Sep 18 11:39 18.txt

-rw-r--r-- 1 root root 0 Sep 18 11:39 19.txt

-rw-r--r-- 1 root root 4 Sep 18 10:36 test1.txt
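The {10..19} in the touch command above is ordinary bash brace expansion: the shell expands it to ten separate arguments before touch runs, so a single command creates ten files. A local demo in a throwaway directory:

```shell
# Brace expansion happens in the shell, not in touch.
tmp=$(mktemp -d)
touch "$tmp"/{10..19}.txt
echo "created $(ls "$tmp" | wc -l) files"
rm -r "$tmp"
```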

 

 

9.2 Create a directory and add it as a brick (on node1):

[root@linux-node1 ~]# mkdir /opt/gluster/exp12  # create the directory to add

[root@linux-node1 ~]# gluster volume add-brick test-volume 192.168.10.101:/opt/gluster/exp12/ force     # add the brick

volume add-brick: success

[root@linux-node1 ~]# gluster volume info test-volume   # check

 

Volume Name: test-volume

Type: Distribute

Volume ID: e2248fcf-a37c-44af-92ce-9e31a3a98764

Status: Started

Snapshot Count: 0

Number of Bricks: 4

Transport-type: tcp

Bricks:

Brick1: 192.168.10.101:/opt/gluster/exp1

Brick2: 192.168.10.102:/opt/gluster/exp2

Brick3: 192.168.10.103:/opt/gluster/exp3

Brick4: 192.168.10.101:/opt/gluster/exp12

Options Reconfigured:

transport.address-family: inet

nfs.disable: on

[root@linux-node1 ~]# tree /opt/gluster/exp12/  # no data here yet, because rebalance has not been started

/opt/gluster/exp12/

 

0 directories, 0 files

[root@linux-node1 ~]# gluster volume rebalance test-volume start   # now start the rebalance

volume rebalance: test-volume: success: Rebalance on test-volume has been started successfully. Use rebalance status command to check status of the rebalance process.

ID: bcb58321-2f43-4b09-96a1-5833d020b7b2

[root@linux-node1 ~]# tree /opt/gluster/exp12/   # check again: the data has arrived

/opt/gluster/exp12/

├── 11.txt

├── 14.txt

└── 16.txt

 

 

10. Remove a brick (shrink the volume):

 

[root@linux-node1 ~]# gluster volume remove-brick test-volume 192.168.10.101:/opt/gluster/exp12 start

Running remove-brick with cluster.force-migration enabled can result in data corruption. It is safer to disable this option so that files that receive writes during migration are not migrated.

Files that are not migrated can then be manually copied after the remove-brick commit operation.

Do you want to continue with your current cluster.force-migration settings? (y/n) y

volume remove-brick start: success

ID: 248846f3-c47d-4097-bcd1-6fc625d95d66

 

10.1 Verify the data after the removal. Note that remove-brick start only migrates the data off the brick; per the admin guide, the brick is actually detached only after running gluster volume remove-brick test-volume 192.168.10.101:/opt/gluster/exp12 commit once the migration status shows completed:

[root@linux-node1 ~]# tree /opt/gluster/exp12/  # no data left in exp12 on node1

/opt/gluster/exp12/

 

 

10.2 The data was moved to node3:

[root@linux-node3 ~]# tree /opt/gluster/  # files 11, 14, and 16 were migrated here

/opt/gluster/

├── exp3

│   ├── 10.txt

│   ├── 11.txt

│   ├── 12.txt

│   ├── 14.txt

│   ├── 15.txt

│   ├── 16.txt

│   └── 18.txt

 

Summary:

The GlusterFS volume types explained

 

1. Distributed volume

   In a distributed volume, files are spread randomly across the bricks in the volume. Use distributed volumes when you need to scale storage and redundancy is either unimportant or provided by other layers. (In short: files are placed onto bricks by a hash algorithm; each file is stored on exactly one server, with no mirroring or striping.)

Create the distributed volume:

# gluster volume create NEW-VOLNAME [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...

For example, to create a distributed volume with four storage servers using tcp:

# gluster volume create test-volume server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4

Creation of test-volume has been successful
Please start the volume to access data.

(Optional) Display the volume information with gluster volume info.

 

 

 

2. Replicated volume

   Replicated volumes create copies of files across multiple bricks in the volume. Use replicated volumes in environments where high availability and high reliability are critical. (In short: similar to RAID 1; the replica count must equal the number of bricks, so availability is high. The bricks mirror one another, so a failed disk does not affect data access; at least two servers are required.)

Create the replicated volume:

# gluster volume create NEW-VOLNAME [replica COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...

For example, to create a replicated volume with two storage servers:

# gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2

Creation of test-volume has been successful
Please start the volume to access data.

 

 

3. Striped volume

   Striped volumes stripe data across the bricks in the volume. For best results, use striped volumes only in high-concurrency environments where very large files are accessed. (In short: similar to RAID 0; the stripe count must equal the number of bricks. Files are split into chunks stored round-robin across the bricks, giving good large-file performance.)

Create the striped volume:

# gluster volume create NEW-VOLNAME [stripe COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...

For example, to create a striped volume across two storage servers:

# gluster volume create test-volume stripe 2 transport tcp server1:/exp1 server2:/exp2

Creation of test-volume has been successful
Please start the volume to access data.

 

 

 

4. Distributed striped volume (compound)

   Distributed striped volumes stripe files across two or more nodes in the cluster. For best results, use them where scaling storage and highly concurrent access to very large files are both critical. (In short: the number of bricks must be a multiple of the stripe count (at least 2x), combining distribution and striping; at least 4 servers are required.)

Create the distributed striped volume:

# gluster volume create NEW-VOLNAME [stripe COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...

For example, to create a distributed striped volume across eight storage servers:

# gluster volume create test-volume stripe 4 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8

Creation of test-volume has been successful
Please start the volume to access data.

 

 

 

 

5. Distributed replicated volume (compound)

   Distributed replicated volumes distribute files across replicated sets of bricks. Use them in environments where scaling storage and high reliability are critical; they also provide improved read performance in most environments. (In short: the number of bricks must be a multiple of the replica count (at least 2x), combining distribution and replication.)

Create the distributed replicated volume:

# gluster volume create NEW-VOLNAME [replica COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...

For example, a four-node distributed replicated volume with a two-way mirror:

# gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4

Creation of test-volume has been successful
Please start the volume to access data.

For example, to create a six-node distributed replicated volume with a two-way mirror:

# gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6

Creation of test-volume has been successful
Please start the volume to access data.

 

 

 

6. Striped replicated volume (compound)

   Striped replicated volumes stripe data across replicated bricks in the cluster. For best results, use them in highly concurrent environments where parallel access to very large files and performance are critical. In this release, this volume type is supported only for Map Reduce workloads.

Create the striped replicated volume:

# gluster volume create NEW-VOLNAME [stripe COUNT] [replica COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...

For example, to create a striped replicated volume across four storage servers:

# gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4

Creation of test-volume has been successful
Please start the volume to access data.

To create a striped replicated volume across six storage servers:

# gluster volume create test-volume stripe 3 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6

Creation of test-volume has been successful
Please start the volume to access data.

 

7. Distributed striped replicated volume (three-way compound)

   Distributed striped replicated volumes distribute striped data across replicated sets of bricks in the cluster. For best results, use them in highly concurrent environments where parallel access to very large files and performance are critical. In this release, this volume type is supported only for Map Reduce workloads.

Create the distributed striped replicated volume:

# gluster volume create NEW-VOLNAME [stripe COUNT] [replica COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...

For example, to create a distributed striped replicated volume across eight storage servers:

# gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8