Installing CDH 5.13 on CentOS 6.5

Date: 2022-05-06


1. Purpose of This Document


Cloudera released CDH 5.13 a while ago; for the new features in 5.13, see the earlier article "New Features in CDH5.13 and CM5.13". This article describes how to install CDH 5.13 on CentOS 6.5. For the cluster installation preparation, refer to the earlier article "CDH Installation Prerequisites". Please make sure you have read that document carefully before installing and have completed the preparation it describes, including basic environment setup such as the yum repository and NTP configuration.

  • Overview

1. Prerequisites

2. Cloudera Manager installation

3. CDH installation

4. Kudu installation

5. Component validation

  • Test environment

1. CentOS 6.5

2. All operations performed as the root user

  • Prerequisites

1. CM and CDH version 5.13

2. CM and CDH installation packages already downloaded

2. Installing Cloudera Manager Server


1. On the CM node, install the Cloudera Manager Server service with the following command:

[root@ip-172-31-6-148 ~]# yum -y install cloudera-manager-server

2. Initialize the CM database:

[root@ip-172-31-6-148 ~]# /usr/share/cmf/schema/scm_prepare_database.sh mysql cm cm password
JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
Verifying that we can write to /etc/cloudera-scm-server
Creating SCM configuration file in /etc/cloudera-scm-server
Executing:  /usr/java/jdk1.7.0_67-cloudera/bin/java -cp /usr/share/java/mysql-connector-java.jar:/usr/share/java/oracle-connector-java.jar:/usr/share/cmf/schema/../lib/* com.cloudera.enterprise.dbutil.DbCommandExecutor /etc/cloudera-scm-server/db.properties com.cloudera.cmf.db.
log4j:ERROR Could not find value for key log4j.appender.A
log4j:ERROR Could not instantiate appender named "A".
[2017-10-15 17:49:38,476] INFO     0[main] - com.cloudera.enterprise.dbutil.DbCommandExecutor.testDbConnection(DbCommandExecutor.java) - Successfully connected to database.
All done, your SCM database is configured correctly!
[root@ip-172-31-6-148 ~]#
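The four positional arguments passed to scm_prepare_database.sh above are the database type, the database name, the database user, and that user's password. A minimal sketch of how the invocation is assembled (the command is built as a string here purely for illustration; the real script must run on the CM host against a reachable MySQL instance):

```shell
# Positional arguments: <db-type> <db-name> <db-user> <db-password>.
# These values mirror the invocation above; adjust them for your own database.
DB_TYPE="mysql"
DB_NAME="cm"
DB_USER="cm"
DB_PASS="password"
SCHEMA_SCRIPT="/usr/share/cmf/schema/scm_prepare_database.sh"
CMD="$SCHEMA_SCRIPT $DB_TYPE $DB_NAME $DB_USER $DB_PASS"
echo "$CMD"
```

If MySQL runs on a different host than CM, the script typically also accepts host options; check `scm_prepare_database.sh --help` on your installation for the exact flags.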

3. Start Cloudera Manager Server:

[root@ip-172-31-6-148 ~]# service cloudera-scm-server start

Starting cloudera-scm-server: [ OK ]

[root@ip-172-31-6-148 ~]#

4. Check that the port is listening:

[root@ip-172-31-6-148 ~]# netstat -apn | grep 7180

tcp 0 0 0.0.0.0:7180 0.0.0.0:* LISTEN 20963/java

[root@ip-172-31-6-148 ~]#

5. Access CM at http://172.31.2.159:7180/cmf/login
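cloudera-scm-server can take a minute or two to open port 7180 after the service starts. A small retry loop saves blind browser refreshes (a sketch assuming curl is installed; cm_wait is a hypothetical helper, not part of CM):

```shell
# Poll a URL until it responds, with a bounded number of retries.
# cm_wait is a hypothetical helper; defaults target the CM login page above.
cm_wait() {
  url=${1:-http://172.31.2.159:7180/cmf/login}
  tries=${2:-30}
  n=0
  while [ "$n" -lt "$tries" ]; do
    if curl -s -o /dev/null "$url"; then
      echo "CM web UI is up at $url"
      return 0
    fi
    n=$((n + 1))
    sleep 2
  done
  echo "gave up waiting for $url" >&2
  return 1
}
```

Usage: `cm_wait http://172.31.2.159:7180/cmf/login 60` waits up to about two minutes before giving up.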

3. CDH Installation

3.1 CDH Cluster Installation Wizard


1. Log in to Cloudera Manager to enter the web installation wizard

2. Specify the hosts for the cluster installation

3. Set the CDH Parcel repository address

4. Set the CM repository address

5. Choose the cluster JDK installation option

6. Choose the cluster installation mode

7. Enter the SSH login credentials for the cluster

8. Install the JDK and the Cloudera Manager Agent service on the cluster hosts

9. Install and activate the Parcel on all cluster hosts

10. Run the host inspector to verify host correctness

3.2 CDH Cluster Setup Wizard


1. Select the combination of services to install

2. Customize the role assignments

3. Configure the databases

4. Review the configuration changes

5. Run the cluster for the first time

6. Cluster installed successfully

7. Go to the CM home page to check the CM and CDH versions

3.3 Kudu Installation


Starting with CDH 5.13.0, Kudu is bundled in the CDH Parcel, which makes installation simpler than before.

1. Log in to CM, and on the home page click "Add Service" for the target cluster

2. On the service selection page, choose "Kudu"

3. Click Continue to reach the Kudu role assignment page, and assign the Kudu Master and Tablet Servers

4. Click Continue, and configure the Kudu WAL and Data directories

5. Click "Continue" to add the Kudu service to the cluster and start it

6. Click "Continue" to finish the Kudu service installation

7. Check the CM home page to confirm that the Kudu service was installed successfully

8. Configure the Impala and Kudu integration

By default, Impala can already run SQL against Kudu. However, to avoid having to add the kudu.master_addresses property in TBLPROPERTIES every time a table is created, it is recommended to set the Kudu Master address in Impala's advanced configuration: --kudu_master_hosts=ip-172-31-6-148.fayson.com:7051
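For comparison, without the --kudu_master_hosts flag every Kudu table DDL must name the master explicitly. A sketch of the two forms (the table and column names are illustrative; the master address matches the one above):

```shell
# Write out the DDL that is needed when --kudu_master_hosts is NOT set:
# the kudu.master_addresses table property must be supplied per table.
cat > create_kudu_table.sql <<'EOF'
CREATE TABLE kudu_demo (
  id BIGINT,
  name STRING,
  PRIMARY KEY(id)
)
PARTITION BY HASH PARTITIONS 16
STORED AS KUDU
TBLPROPERTIES ('kudu.master_addresses' = 'ip-172-31-6-148.fayson.com:7051');
EOF
# Once --kudu_master_hosts is configured in Impala, the TBLPROPERTIES line
# can be dropped entirely, and the DDL is run as usual, e.g.:
#   impala-shell -i <impalad-host> -f create_kudu_table.sql
grep -c "kudu.master_addresses" create_kudu_table.sql
```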

Save the configuration, return to the CM home page, and restart the affected services as prompted.

4. Quick Service Validation

4.1 HDFS Validation (mkdir + put + cat + get)


[root@ip-172-31-6-148 ~]# hadoop fs -mkdir -p /fayson/test_table
[root@ip-172-31-6-148 ~]# cat a.txt
1,test
2,fayson
3,zhangsan
[root@ip-172-31-6-148 ~]# hadoop fs -put a.txt /fayson/test_table
[root@ip-172-31-6-148 ~]# hadoop fs -cat /fayson/test_table/a.txt
1,test
2,fayson
3,zhangsan
[root@ip-172-31-6-148 ~]# rm -rf a.txt
[root@ip-172-31-6-148 ~]# hadoop fs -get /fayson/test_table/a.txt .
[root@ip-172-31-6-148 ~]# cat a.txt
1,test
2,fayson
3,zhangsan
[root@ip-172-31-6-148 ~]#

4.2 Hive Validation


[root@ip-172-31-6-148 ~]# hive
hive> create external table test_table (
    >   s1 string,
    >   s2 string
    > )
    > row format delimited fields terminated by ','
    > stored as textfile location '/fayson/test_table';
OK
Time taken: 2.117 seconds
hive> select * from test_table;
OK
1       test
2       fayson
3       zhangsan
Time taken: 0.683 seconds, Fetched: 3 row(s)
hive> select count(*) from test_table;
...
OK
3
Time taken: 26.174 seconds, Fetched: 1 row(s)
hive> 

4.3 MapReduce Validation


[root@ip-172-31-6-148 ~]# hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-examples.jar pi 5 5
...
17/10/31 07:13:52 INFO mapreduce.Job:  map 100% reduce 100%
17/10/31 07:13:52 INFO mapreduce.Job: Job job_1509333728959_0018 completed successfully
...
Job Finished in 22.662 seconds
Estimated value of Pi is 3.68000000000000000000
[root@ip-172-31-6-148 ~]# 

4.4 Impala Validation


[root@ip-172-31-6-148 ~]# impala-shell -i ip-172-31-9-33.fayson.com
...
[ip-172-31-9-33.fayson.com:21000] > invalidate metadata;
...
Fetched 0 row(s) in 4.18s
[ip-172-31-9-33.fayson.com:21000] > show tables;
Query: show tables
+------------+
| name       |
+------------+
| test       |
| test_table |
+------------+
Fetched 2 row(s) in 0.01s
[ip-172-31-9-33.fayson.com:21000] > select * from test_table;
...
+----+----------+
| s1 | s2       |
+----+----------+
| 1  | test     |
| 2  | fayson   |
| 3  | zhangsan |
+----+----------+
Fetched 3 row(s) in 5.42s
[ip-172-31-9-33.fayson.com:21000] > select count(*) from test_table;
...
+----------+
| count(*) |
+----------+
| 3        |
+----------+
Fetched 1 row(s) in 0.16s
[ip-172-31-9-33.fayson.com:21000] > 

4.5 Spark Validation


[root@ip-172-31-6-148 ~]# spark-shell
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.0
      /_/

Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_67)
...
scala> var textFile=sc.textFile("/fayson/test_table/a.txt")
textFile: org.apache.spark.rdd.RDD[String] = /fayson/test_table/a.txt MapPartitionsRDD[1] at textFile at <console>:27

scala> textFile.count()
res0: Long = 3
 
scala> 

4.6 Kudu Validation


[root@ip-172-31-6-148 ~]# impala-shell -i ip-172-31-9-33.fayson.com
...
[ip-172-31-9-33.fayson.com:21000] > CREATE TABLE my_first_table (
    >   id BIGINT,
    >   name STRING,
    >   PRIMARY KEY(id)
    > )
    > PARTITION BY HASH PARTITIONS 16
    > STORED AS KUDU;
...
Fetched 0 row(s) in 2.40s
[ip-172-31-9-33.fayson.com:21000] > INSERT INTO my_first_table VALUES (99, "sarah");
...
Modified 1 row(s), 0 row error(s) in 4.14s
[ip-172-31-9-33.fayson.com:21000] > INSERT INTO my_first_table VALUES (1, "john"), (2, "jane"), (3, "jim");
...
Modified 3 row(s), 0 row error(s) in 0.11s
[ip-172-31-9-33.fayson.com:21000] > select * from my_first_table;
...
+----+-------+
| id | name  |
+----+-------+
| 1  | john  |
| 99 | sarah |
| 2  | jane  |
| 3  | jim   |
+----+-------+
Fetched 4 row(s) in 1.12s
[ip-172-31-9-33.fayson.com:21000] > delete from my_first_table where id =99;
...
Modified 1 row(s), 0 row error(s) in 0.17s
[ip-172-31-9-33.fayson.com:21000] > select * from my_first_table;
...
+----+------+
| id | name |
+----+------+
| 1  | john |
| 2  | jane |
| 3  | jim  |
+----+------+
Fetched 3 row(s) in 0.14s
[ip-172-31-9-33.fayson.com:21000] > update my_first_table set name='fayson' where id=1;      
...
Modified 1 row(s), 0 row error(s) in 0.14s
[ip-172-31-9-33.fayson.com:21000] > select * from my_first_table;
...
+----+--------+
| id | name   |
+----+--------+
| 2  | jane   |
| 3  | jim    |
| 1  | fayson |
+----+--------+
Fetched 3 row(s) in 0.04s
[ip-172-31-9-33.fayson.com:21000] > upsert into my_first_table values(1, "john"), (2, "tom");
...
Modified 2 row(s), 0 row error(s) in 0.11s
[ip-172-31-9-33.fayson.com:21000] > select * from my_first_table;
...
+----+------+
| id | name |
+----+------+
| 2  | tom  |
| 3  | jim  |
| 1  | john |
+----+------+
Fetched 3 row(s) in 0.06s
[ip-172-31-9-33.fayson.com:21000] > select count(*) from my_first_table;
...
+----------+
| count(*) |
+----------+
| 3        |
+----------+
Fetched 1 row(s) in 0.39s
[ip-172-31-9-33.fayson.com:21000] >

To establish the heart of Heaven and Earth, to secure the livelihood of the people, to carry on the lost teachings of past sages, and to bring peace to all generations to come.


Follow "Hadoop实操" to get more hands-on Hadoop content first; forwarding and sharing are welcome.

This is an original article; reprints are welcome. Please credit the WeChat public account Hadoop实操 when reprinting.