Hadoop Installation

Date: 2019-09-19

This is a long post, so let's skip the chatter and get straight to the point.

Hadoop can be installed in three modes: standalone (local), pseudo-distributed, and fully distributed.

This post covers the fully distributed mode, on CentOS 6.5 with hadoop-2.6.5.

Step 1: Prepare 4 virtual or physical machines; see my other blog posts for the detailed steps.

Step 2: Check the hostname and change it          [Be sure to remember how to change the hostname; it is needed in many places, but the method differs between OS versions.]

[root@localhost ~]# hostname
localhost.localdomain

[root@localhost ~]# vi /etc/sysconfig/network

[root@localhost ~]# hostname
localhost.localdomain

Change it to:

NETWORKING=yes
HOSTNAME=hadoop10

This method only takes effect after a reboot, which is why hostname still shows the old value. I don't want to reboot here, so I use the temporary change command instead:

[root@localhost ~]# hostname hadoop10
[root@localhost ~]# hostname
hadoop10

The temporary change, in turn, is lost after a reboot, which is why we also edited the file above.
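As an aside (not applicable to the CentOS 6 used in this post): on CentOS 7 and other systemd-based distributions, a single command changes the hostname both immediately and persistently:

hostnamectl set-hostname hadoop10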

Change the hostname on each of the 4 machines in turn.

Step 3: Edit the IP-to-hostname mapping table /etc/hosts

This file has nothing to do with the hostname change above. It must be present on every node of the cluster to tell each node which hostname belongs to which IP; it acts like a local DNS.

Open it with vi and add the following:

192.168.10.10 hadoop10
192.168.10.11 hadoop11
192.168.10.12 hadoop12
192.168.10.13 hadoop13

Update /etc/hosts on each of the 4 machines, for example with the loop below.
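A minimal sketch for pushing the same file to the other nodes (hostnames as above; until passwordless SSH is set up in Step 5, each scp prompts for a password):

for h in hadoop11 hadoop12 hadoop13; do
    scp /etc/hosts root@$h:/etc/hosts    # copy the mapping to every other node
done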

Step 4: Disable the firewall

[root@hadoop11 ~]# chkconfig iptables off

[root@localhost ~]# chkconfig --list iptables
iptables           0:off    1:off    2:off    3:off    4:off    5:off    6:off
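Note that chkconfig only disables iptables for subsequent boots. To stop the running firewall immediately on CentOS 6:

service iptables stop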

Step 5: Passwordless SSH login

1. First check whether SSH is already installed

[root@localhost ~]# ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is d1:40:d3:50:c8:2d:af:d4:a0:d4:cb:9f:6d:8d:ed:2f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
root@localhost's password: 
Last login: Tue Sep 17 01:11:07 2019 from 192.168.10.1

Output like the above means SSH is already installed. If not, install it with:

yum install openssh-server -y

2. Go to the .ssh directory under the login user's home directory

[root@hadoop10 ~]# cd ~/.ssh
[root@hadoop10 .ssh]# ls
known_hosts

Initially this directory holds only one file, known_hosts, which records the host keys of machines we have already connected to over SSH (it was created by the ssh logins above, not by editing /etc/hosts).

3. Generate a public/private key pair

[root@hadoop10 .ssh]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
df:db:71:2b:a7:59:96:95:88:cd:0d:7e:25:85:f1:0d root@hadoop10
The key's randomart image is:
+--[ RSA 2048]----+
|              Eo.|
|              .+.|
|             .. +|
|            = +.o|
|        S  . = +.|
|         . .  . o|
|          . . .+.|
|             +++.|
|            .o=. |
+-----------------+

Just press Enter at every prompt; nothing else is needed.

Running ls again now shows the public and private keys:

[root@hadoop10 .ssh]# ls
id_rsa  id_rsa.pub  known_hosts

Generate a key pair on each of the 4 machines in turn.

4. Send the public key to the other nodes

First append this machine's public key to the authorized_keys file:

[root@hadoop10 .ssh]# cat id_rsa.pub >>  authorized_keys
[root@hadoop10 .ssh]# ls
authorized_keys  id_rsa  id_rsa.pub  known_hosts

Then send authorized_keys to every other node:

[root@hadoop10 .ssh]# scp authorized_keys root@hadoop12:~/.ssh
The authenticity of host 'hadoop12 (192.168.10.12)' can't be established.
RSA key fingerprint is 43:68:54:4e:85:ed:ac:30:7c:b2:a1:48:02:b9:67:57.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop12,192.168.10.12' (RSA) to the list of known hosts.
root@hadoop12's password: 
authorized_keys                

You may also need to fix the permissions: sshd rejects keys whose files are too open. authorized_keys should be 644 or, safer, 600, and ~/.ssh itself should be 700, as sketched below.
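A quick sketch of the usual permission fix:

chmod 700 ~/.ssh                      # sshd refuses keys if .ssh is group/world accessible
chmod 600 ~/.ssh/authorized_keys      # 644 usually works too; 600 is safer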

Now you can test it: this node can log in to the other nodes without entering a password.

[root@hadoop10 .ssh]# ssh hadoop12
Last login: Tue Sep 17 01:49:50 2019 from localhost

[root@hadoop12 ~]# exit
logout
Connection to hadoop12 closed.

Login succeeded, then we logged out.

Repeat the step above on each of the 4 machines, so that every node's key reaches every other node; the ssh-copy-id sketch below automates it.
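As an alternative to the manual cat/scp dance (my suggestion, not the method used in this post), ssh-copy-id appends the local public key to the target's authorized_keys and fixes permissions in one step; run this on each node:

for h in hadoop10 hadoop11 hadoop12 hadoop13; do
    ssh-copy-id root@$h    # prompts once for each target's password
done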

In the end, the authorized_keys file contains the public keys of all 4 nodes:

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAuFkD0t6HZM/H7pyqjqBnrnF+4wr2gI8p4wjCDdN8smAH8ujLviUAK0rE1Gh8bcXtWSjLmFLOf1oQwrCvtWnP4q9+enFwgqFFLEkQvT5jRbKrJImYWpafGimOlO5hb1jPZKrxpRZlMy9LFzLnfr5aJ+fESE2sSrTwlXbfXm0w1xhBKzoo5JZq8xIvzYXYQ8qyaTRFd2+EZbZKJ0CgVw83hKjiq9bjrbqtEg2oo8FdQwi4SNZ6d4jozhw54J8nCk8YduVneYoFSf1gmdwUcMb2iyGUfMRrhK3k0vUxBZKsfrG9aS4P4Gzd/CVGtMlqEWVldyTS9vmORHNAHEFqdyVI/w== root@hadoop10
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA7pZA4t2E00jJtotZeFST+HWXrAtzfjGFBvDkpnqwoYs1cEjsr8Ez2XjWbcdGBqbEFNohTWUh0dpfQHyWcT2fun10aRJ9GyYuebzSJm5BWT06PKWB5QavqNtdmqNTSzEfNXGjyvaV8PbfFA8kfIeaiq0/uTwTrtjcLHmN9ENm1NjJqibZxNSNJnQGXJs7Gj6ujIXrVmr//G9OqS97ZM5slgHw68F7azvpCfzHBsJu3QTZYL96WRUSRXHH8GteRMtBYVlRzg7N1gU+YKx4fMXjEk7xu/p8ub5IG5kClCIU+mR+Z0VNReGVP3n4GZuE/Fa3OMerESUs6i/GWczNbA2cSQ== root@hadoop11
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAqL/aQhVUd4B7VsfnzOFEXFQJX/rV1obelijX6M/eVns2IlpxB54UUgYoAet97Xew5vc31tAAbURW8zS4CAJujKWKFnAB/R2UIzLww6CxahsTqrsPkj89SiLl3Q4SsBDC49hULfbd5AxuEdq/v0XIFT2jsbpaUtWQ2pF5HxzkhpnrpEbcwHjc14GfM1cFtyPcR3XXZC4P+scaLGgdn8I3So0k6ENqo7LfQ7y2/FNQMXtKxObfO0j7bESsNWQxPGwolXdVeBO4VEYIrYH/6/gPdOxtNGe2gCnr8MM8z7eElLXy1cF5wTddv6vCdBv9bl5H3/BHtUrJ+/5/XjkkyRVECw== root@hadoop12
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAvOn53kK/2uoDBKKq/0LQhJ63S34K6lnksgAGJYWTugx57TxroRvms2DkdrV3EKhlIzVkpE3Xzrx4hyOFHXfnfAdsrvj22zgsPx4cNxM0Tmx6ELwCpcLPF381lDjEc5/7MEqQB+wV07tjAZAXOl5wETLLO269iHvbX3oEZ3Q62xq52BLoKCkBunk5C0lVDHAhKtzBp1XTntixircUIxpNWWduhoUwiaTrUrki8gEyC2O/Hm9Wq6h2RyC7SvH8jaAZoC9UUso50TitD10J5bhdeg8iYnhb/wUJZ5zhkwSJuj8H4j8huCo5j/eX7sPXe/3eKnVlpEz/PX0/8eAQYJY6SQ== root@hadoop13

The end result: every machine can log in to every other machine over SSH without a password.

Step 6: Install Java

There are many ways to do this; search the web if you prefer another one.

First check whether Java is already installed:

yum list installed |grep java

List the available versions with yum, then install:

yum -y list java*
yum -y install java-1.8.0-openjdk*

Verify the version:

[root@node .ssh]# java -version
openjdk version "1.8.0_181"
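A hedged aside: later steps (hadoop-env.sh, jps) want JAVA_HOME set. With the OpenJDK yum packages the JVM usually lives under /usr/lib/jvm; the exact directory name below is an assumption, so check it with ls first:

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk    # adjust to the actual path on your machine
export PATH=$JAVA_HOME/bin:$PATH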

第七步:安装 hadoop

下载地址  hadoop,注意不要下载 包含 src 的 tar 包,否则踩坑

解压即可

然后设置环境变量,测试是否安装成功

[root@hadoop10 lib]# vi /etc/profile
[root@hadoop10 lib]# source /etc/profile
[root@hadoop10 lib]# hadoop
Usage: hadoop [--config confdir] COMMAND
       where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
  checknative [-a|-h]  check native hadoop and compression libraries availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the
                       Hadoop jar and the required libraries
  credential           interact with credential providers
  daemonlog            get/set the log level for each daemon
  trace                view and modify Hadoop tracing settings
 or
  CLASSNAME            run the class named CLASSNAME

Most commands print help when invoked w/o parameters.

This output means the installation succeeded.

The environment variables are set as follows:

export HADOOP_HOME=/usr/lib/hadoop-2.6.5
export PATH=.:$HADOOP_HOME/bin:$PATH
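An optional extra (my addition, not part of the original setup): also putting $HADOOP_HOME/sbin on the PATH lets you run the start/stop scripts used later without the sbin/ prefix:

export PATH=.:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH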

Install Hadoop on each of the 4 machines in turn.

Note: performing only Step 7 gives you the standalone installation mode. Yes, that one step is all it takes. Let's run a small test of standalone mode here.

Go into the Hadoop root directory, create an input folder, put a file log.txt into it, then run the word-count example from the root directory:

[root@hadoop10 lib]# cd hadoop-2.6.5
[root@hadoop10 hadoop-2.6.5]# ls
bin  etc  include  lib  libexec  LICENSE.txt  NOTICE.txt  README.txt  sbin  share
[root@hadoop10 hadoop-2.6.5]# mkdir input
[root@hadoop10 input]# ls
log.txt

[root@hadoop10 hadoop-2.6.5]# hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar wordcount input output 

19/09/17 23:07:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/09/17 23:07:20 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
19/09/17 23:07:20 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
19/09/17 23:07:20 INFO input.FileInputFormat: Total input paths to process : 1
19/09/17 23:07:20 INFO mapreduce.JobSubmitter: number of splits:1
19/09/17 23:07:20 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1348719737_0001
19/09/17 23:07:21 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
19/09/17 23:07:21 INFO mapreduce.Job: Running job: job_local1348719737_0001
19/09/17 23:07:21 INFO mapred.LocalJobRunner: OutputCommitter set in config null
19/09/17 23:07:21 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
19/09/17 23:07:21 INFO mapred.LocalJobRunner: Waiting for map tasks
19/09/17 23:07:21 INFO mapred.LocalJobRunner: Starting task: attempt_local1348719737_0001_m_000000_0
19/09/17 23:07:21 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/09/17 23:07:21 INFO mapred.MapTask: Processing split: file:/usr/lib/hadoop-2.6.5/input/log.txt:0+183
19/09/17 23:07:21 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/09/17 23:07:21 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/09/17 23:07:21 INFO mapred.MapTask: soft limit at 83886080
...

19/09/17 23:07:22 INFO mapreduce.Job: map 100% reduce 100%

Step 8: Configure Hadoop

The Hadoop configuration files all live under etc/hadoop inside the installation directory (here /usr/lib/hadoop-2.6.5/etc/hadoop), not under the system /etc.

Configure: etc/hadoop/hadoop-env.sh

Change the JAVA_HOME variable to an absolute path.
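For example (the JDK path is an assumption; check where your JVM actually lives):

# in hadoop-env.sh, replace export JAVA_HOME=${JAVA_HOME} with an absolute path
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk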

Configure: etc/hadoop/core-site.xml

Set the address of the HDFS NameNode;

set the storage path for Hadoop's runtime temporary files.

<configuration>

        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://hadoop10:8020</value>
        </property>
        <property>
                  <name>hadoop.tmp.dir</name>
                  <value>/opt/module/hadoop-2.6.5/data/tmp</value>
        </property>

</configuration>

If hadoop.tmp.dir is not set, the default storage path is /tmp/hadoop-${user.name}.

Configure: etc/hadoop/hdfs-site.xml

Set the HDFS replication factor (the default is 3). Here it is set to 5; note, though, that the effective replication can never exceed the number of DataNodes, which is 4 in this cluster. The SecondaryNameNode HTTP address is also set here.

<configuration>

    <property>
        <name>dfs.replication</name>
        <value>5</value>
    </property>

    <property>
         <name>dfs.namenode.secondary.http-address</name>
         <value>hadoop10:50090</value>
     </property>

</configuration>

Configure: etc/hadoop/mapred-site.xml

Edit the MapReduce configuration. (Hadoop 2 no longer has a JobTracker; instead we point MapReduce at YARN.)

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

This tells MapReduce to run on YARN.
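One detail worth noting: the Hadoop 2.6.5 tarball ships only mapred-site.xml.template, so create the real file first:

cd /usr/lib/hadoop-2.6.5/etc/hadoop
cp mapred-site.xml.template mapred-site.xml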

Configure: etc/hadoop/slaves

Delete the existing content and write in the hostnames of all nodes; this is what lets one command start the whole cluster.

hadoop10
hadoop11
hadoop12
hadoop13

Configure: etc/hadoop/yarn-env.sh and etc/hadoop/mapred-env.sh

Likewise change the JAVA_HOME variable to an absolute path.    [You can try skipping this step as well.]

Configure: etc/hadoop/yarn-site.xml

<configuration>
        <!-- address of the YARN master (the ResourceManager) -->
        <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>hadoop10</value>
        </property>

        <!-- how reducers fetch data: the shuffle auxiliary service -->
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
</configuration>

Step 9: Copy the modified configuration to the other machines

scp -r /usr/lib/hadoop-2.6.5/etc/hadoop root@hadoop13:/usr/lib/hadoop-2.6.5/etc/
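The command above covers only hadoop13; a loop like the following (hostnames as before) covers every other node:

for h in hadoop11 hadoop12 hadoop13; do
    scp -r /usr/lib/hadoop-2.6.5/etc/hadoop root@$h:/usr/lib/hadoop-2.6.5/etc/
done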

Step 10: Start the cluster

Format the NameNode

The cluster is built; before storing any data we format the NameNode so it starts from a clean state, which also creates its initial metadata structures. (Despite the name, this only initializes HDFS metadata; it does not format any disks.)

Note: format only before the very first start; do not format again for later starts.

Check which node the NameNode is configured on (fs.defaultFS in core-site.xml), then run the following command on that node:

bin/hdfs namenode -format

Start the NameNode

[root@hadoop10 hadoop-2.6.5]# sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /usr/lib/hadoop-2.6.5/logs/hadoop-root-namenode-hadoop10.out
[root@hadoop10 hadoop-2.6.5]# jps
3877 NameNode
3947 Jps

Start the DataNode

[root@hadoop10 hadoop-2.6.5]# sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /usr/lib/hadoop-2.6.5/logs/hadoop-root-datanode-hadoop10.out
[root@hadoop10 hadoop-2.6.5]# jps
3877 NameNode
4060 Jps
3982 DataNode

Starting HDFS in two separate steps is tedious, and notice that the SecondaryNameNode was not started at all. So Hadoop provides another way.

Start HDFS in one step: NameNode, DataNodes, and SecondaryNameNode

[root@hadoop10 hadoop-2.6.5]# sbin/start-dfs.sh 
19/09/18 18:37:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop10]
hadoop10: starting namenode, logging to /usr/lib/hadoop-2.6.5/logs/hadoop-root-namenode-hadoop10.out
hadoop10: starting datanode, logging to /usr/lib/hadoop-2.6.5/logs/hadoop-root-datanode-hadoop10.out
hadoop13: starting datanode, logging to /usr/lib/hadoop-2.6.5/logs/hadoop-root-datanode-hadoop13.out
hadoop12: starting datanode, logging to /usr/lib/hadoop-2.6.5/logs/hadoop-root-datanode-hadoop12.out
hadoop11: starting datanode, logging to /usr/lib/hadoop-2.6.5/logs/hadoop-root-datanode-hadoop11.out
Starting secondary namenodes [hadoop10]
hadoop10: starting secondarynamenode, logging to /usr/lib/hadoop-2.6.5/logs/hadoop-root-secondarynamenode-hadoop10.out
19/09/18 18:37:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@hadoop10 hadoop-2.6.5]# jps
6162 NameNode
6258 DataNode
6503 Jps
6381 SecondaryNameNode

Start YARN

Likewise, check which node YARN is configured on (yarn.resourcemanager.hostname in yarn-site.xml), then run the following command on that node:

[root@hadoop10 hadoop-2.6.5]# sbin/start-yarn.sh 
starting yarn daemons
starting resourcemanager, logging to /usr/lib/hadoop-2.6.5/logs/yarn-root-resourcemanager-hadoop10.out
hadoop10: starting nodemanager, logging to /usr/lib/hadoop-2.6.5/logs/yarn-root-nodemanager-hadoop10.out
hadoop13: starting nodemanager, logging to /usr/lib/hadoop-2.6.5/logs/yarn-root-nodemanager-hadoop13.out
hadoop11: starting nodemanager, logging to /usr/lib/hadoop-2.6.5/logs/yarn-root-nodemanager-hadoop11.out
hadoop12: starting nodemanager, logging to /usr/lib/hadoop-2.6.5/logs/yarn-root-nodemanager-hadoop12.out
[root@hadoop10 hadoop-2.6.5]# jps
6162 NameNode
6770 NodeManager
6258 DataNode
7012 Jps
6668 ResourceManager
6381 SecondaryNameNode

The ResourceManager and NodeManagers start together with YARN.

At this point the Hadoop cluster is up, including HDFS, YARN, and MapReduce.

Still a lot of typing, so Hadoop also provides one-command start and stop scripts:

sbin/start-all.sh 
sbin/stop-all.sh

Step 11: Access the Hadoop cluster remotely

Use the NameNode's IP.

Port 50070 serves the HDFS web UI: http://192.168.10.10:50070

Port 8088 serves the YARN ResourceManager web UI, where MapReduce jobs appear: http://192.168.10.10:8088

Step 12: A simple test

Create a directory in the HDFS filesystem; there are two equivalent ways:

bin/hdfs dfs -mkdir -p /usr/input/yanshw
bin/hadoop fs -mkdir -p /usr/input/yanshw

Note that this directory is not visible in the local filesystem; it lives inside HDFS, like a directory created in the cloud.

It can be viewed remotely in the web UI.
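You can also verify it from the command line, for instance by listing the parent directory:

bin/hdfs dfs -ls /usr/input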

Upload a file

bin/hadoop fs -put README.txt /usr/input/yanshw

It can be viewed remotely.

Run the job

You must specify both the input and the output path, and the output directory must not already exist (the job fails if it does).

hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar wordcount /usr/input/yanshw /usr/output/yanshw

View the result remotely.

Operating HDFS from the command line

The pattern is hadoop fs -<linux command>: most familiar Linux file commands (ls, cat, rm, mkdir, ...) work with a leading dash.

For example, printing the job output:

[root@hadoop10 usr]# hadoop fs -cat /usr/output/yanshw/p*
19/09/18 19:38:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
(BIS),    1
(ECCN)    1

Troubleshooting notes

1. jps not found

jps lists Java processes.

If the jps command is not found, Java is not fully set up; install the JDK properly (jps ships with the OpenJDK devel package) and set the Java environment variables.

2. DataNode fails to start after a restart

The first setup usually succeeds, but after a restart the DataNode may fail to start: the NameNode no longer recognizes it (typically because the NameNode was formatted again).

When the NameNode is formatted, it generates two identifiers, blockPoolId and clusterId.

When a DataNode joins, it records these two identifiers as proof that it belongs to this NameNode; that is what holds the cluster together.

Once the NameNode is formatted again, both identifiers change;

but the DataNode still shows up with the old identifiers, so it is naturally turned away.

Fix: delete the data on all nodes, i.e. the tmp directories, including the NameNode's, then re-format and start again; a sketch follows.
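A hedged sketch of that cleanup, using the hadoop.tmp.dir configured earlier. This is destructive: it erases all HDFS data.

sbin/stop-all.sh                                             # stop everything first
for h in hadoop10 hadoop11 hadoop12 hadoop13; do
    ssh root@$h 'rm -rf /opt/module/hadoop-2.6.5/data/tmp'   # wipe hadoop.tmp.dir on every node
done
bin/hdfs namenode -format                                    # re-format on the NameNode host
sbin/start-all.sh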

3. Every kind of operation prints the following warning

WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

You can ignore it; it is only a warning (the bundled native library does not match the platform, so Hadoop falls back to the pure-Java implementations). If you really want to fix it, see the references below.

References:

https://www.cnblogs.com/laov/p/3421479.html              (hadoop 1.2.1)

https://blog.csdn.net/baidu_28997655/article/details/81586418   (hadoop 2.6.5)

https://blog.csdn.net/qq285016127/article/details/80501418    (hadoop 2.6.4)

https://www.cnblogs.com/xia520pi/archive/2012/05/16/2503949.html#_label3_0    (very detailed)

Original post: https://www.cnblogs.com/yanshw/p/11535633.html