[Fully Distributed Hadoop] (9) Installing a Highly Available Hadoop Cluster (HDFS HA, YARN HA)


I. Fully Distributed Hadoop Cluster

Official Hadoop site: https://hadoop.apache.org/

1 Prepare three machines

1.1 Firewall, static IP, and hostname

https://www.codeobj.com/2018/10/%E3%80%90%E5%AE%8C%E5%85%A8%E5%88%86%E5%B8%83%E5%BC%8Fhadoop%E3%80%91%EF%BC%88%E4%B8%80%EF%BC%89%E4%BB%8E%E8%99%9A%E6%8B%9F%E6%9C%BAcentos6-5%E7%9A%84%E5%AE%89%E8%A3%85%E5%BC%80%E5%A7%8B/

1.2 Edit the hosts file

Covered in the first article of this series.

1.3 Add a user account

https://www.codeobj.com/?p=300

1.4 Install and configure JDK 1.8

https://www.codeobj.com/?p=302

1.5 Set up passwordless SSH

https://www.codeobj.com/?p=298

2 Install the Hadoop cluster

2.1 Cluster deployment plan

Node       NN        JN           DN        ZKFC  ZK         RM               NM
hadoop000  NameNode  JournalNode  DataNode  ZKFC  ZooKeeper  -                NodeManager
hadoop001  NameNode  JournalNode  DataNode  ZKFC  ZooKeeper  ResourceManager  NodeManager
hadoop002  -         JournalNode  DataNode  -     ZooKeeper  ResourceManager  NodeManager

2.2 Install the ZooKeeper cluster

https://www.codeobj.com/?p=371

2.3 Install and configure the Hadoop cluster

2.3.1 Extract and install Hadoop

Extract hadoop-2.6.0-cdh5.7.0.tar.gz into /home/hadoop/app/:
[hadoop@hadoop000 software]$ tar -xzvf hadoop-2.6.0-cdh5.7.0.tar.gz -C /home/hadoop/app/

2.3.2 Configure the Hadoop cluster

All configuration files live in /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop.

1) Set the JAVA_HOME environment variable in hadoop-env.sh, mapred-env.sh, and yarn-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_45
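
To avoid editing each file by hand, a minimal sketch that appends the same export to all three env scripts (the JDK path is the one used above; an appended export overrides any default already in the file):

cd /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop
for f in hadoop-env.sh mapred-env.sh yarn-env.sh; do
  echo 'export JAVA_HOME=/usr/java/jdk1.8.0_45' >> "$f"   # appended export wins over the file's default
done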

2) Edit core-site.xml

[hadoop@hadoop000 hadoop]$ vi core-site.xml

<configuration>
    <!-- fs.defaultFS gives clients the NameNode URI; with HA this is the nameservice ID and must match dfs.nameservices in hdfs-site.xml -->
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://ruozeclusterg5</value>
        </property>
        <!--============================== Trash ======================================= -->
        <property>
                <!-- How often (minutes) the NameNode's checkpointer turns the Current trash folder into a checkpoint; default 0 means it follows fs.trash.interval -->
                <name>fs.trash.checkpoint.interval</name>
                <value>0</value>
        </property>
        <property>
                <!-- Minutes a checkpoint survives under .Trash before deletion; the server-side value overrides the client's; the default 0 disables trash entirely -->
                <name>fs.trash.interval</name>
                <value>1440</value>
        </property>

         <!-- Hadoop temporary directory. hadoop.tmp.dir is the base setting many other paths derive from; if hdfs-site.xml does not set the namenode and datanode storage locations, they default to subdirectories of this path -->
        <property>   
                <name>hadoop.tmp.dir</name>
                <value>/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/tmp</value>
        </property>

         <!-- ZooKeeper quorum address -->
        <property>
                <name>ha.zookeeper.quorum</name>
                <value>hadoop000:2181,hadoop001:2181,hadoop002:2181</value>
        </property>
         <!-- ZooKeeper session timeout in milliseconds -->
        <property>
                <name>ha.zookeeper.session-timeout.ms</name>
                <value>2000</value>
        </property>

        <!-- Allow the hadoop user to act as a proxy user for requests from any host and any group -->
        <property>
           <name>hadoop.proxyuser.hadoop.hosts</name>
           <value>*</value> 
        </property> 
        <property> 
            <name>hadoop.proxyuser.hadoop.groups</name> 
            <value>*</value> 
       </property> 


      <!-- Compression codecs made available to HDFS and MapReduce clients -->
      <property>
          <name>io.compression.codecs</name>
          <value>org.apache.hadoop.io.compress.GzipCodec,
            org.apache.hadoop.io.compress.DefaultCodec,
            org.apache.hadoop.io.compress.BZip2Codec,
            org.apache.hadoop.io.compress.SnappyCodec
          </value>
      </property>
</configuration>
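
With fs.trash.interval at 1440 (24 hours), a deleted file moves into the owner's .Trash directory instead of disappearing immediately. Once the cluster is up, a quick check (the file name here is only an example):

[hadoop@hadoop000 hadoop]$ hdfs dfs -touchz /tmp/trash-demo.txt
[hadoop@hadoop000 hadoop]$ hdfs dfs -rm /tmp/trash-demo.txt
# the rm output reports a move to trash rather than a delete
[hadoop@hadoop000 hadoop]$ hdfs dfs -ls /user/hadoop/.Trash/Current/tmp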
3) Edit hdfs-site.xml

[hadoop@hadoop000 hadoop]$ vi hdfs-site.xml

<configuration>
    <!-- HDFS superuser group -->
    <property>
        <name>dfs.permissions.superusergroup</name>
        <value>hadoop</value>
    </property>

    <!-- Enable WebHDFS -->
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/data/dfs/name</value>
        <description>Local directory where the NameNode stores the name table (fsimage); change to suit your layout</description>
    </property>
    <property>
        <name>dfs.namenode.edits.dir</name>
        <value>${dfs.namenode.name.dir}</value>
        <description>Local directory where the NameNode stores the transaction log (edits); change to suit your layout</description>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/data/dfs/data</value>
        <description>Local directory where the DataNode stores blocks; change to suit your layout</description>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <!-- Block size 256 MB (default 128 MB) -->
    <property>
        <name>dfs.blocksize</name>
        <value>268435456</value>
    </property>
    <!--======================================================================= -->
    <!-- HDFS high-availability configuration -->
    <!-- The HDFS nameservice ID (ruozeclusterg5); must match fs.defaultFS in core-site.xml -->
    <property>
        <name>dfs.nameservices</name>
        <value>ruozeclusterg5</value>
    </property>
    <property>
        <!-- NameNode IDs; this Hadoop version supports at most two NameNodes -->
        <name>dfs.ha.namenodes.ruozeclusterg5</name>
        <value>nn1,nn2</value>
    </property>

    <!-- HDFS HA: dfs.namenode.rpc-address.[nameservice ID].[NameNode ID], the RPC address of each NameNode -->
    <property>
        <name>dfs.namenode.rpc-address.ruozeclusterg5.nn1</name>
        <value>hadoop000:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ruozeclusterg5.nn2</name>
        <value>hadoop001:8020</value>
    </property>

    <!-- HDFS HA: dfs.namenode.http-address.[nameservice ID].[NameNode ID], the HTTP address of each NameNode -->
    <property>
        <name>dfs.namenode.http-address.ruozeclusterg5.nn1</name>
        <value>hadoop000:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ruozeclusterg5.nn2</name>
        <value>hadoop001:50070</value>
    </property>

    <!--================== NameNode edit-log synchronization ============================================ -->
    <!-- JournalNodes persist the edit log so it can be recovered after failover -->
    <property>
        <name>dfs.journalnode.http-address</name>
        <value>0.0.0.0:8480</value>
    </property>
    <property>
        <name>dfs.journalnode.rpc-address</name>
        <value>0.0.0.0:8485</value>
    </property>
    <property>
        <!-- JournalNode quorum that QuorumJournalManager uses to store the edit log -->
        <!-- Format: qjournal://<host1:port1>;<host2:port2>;<host3:port3>/<journalId>; the port matches dfs.journalnode.rpc-address -->
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop000:8485;hadoop001:8485;hadoop002:8485/ruozeclusterg5</value>
    </property>

    <property>
        <!-- Local directory where JournalNodes store their data -->
        <name>dfs.journalnode.edits.dir</name>
        <value>/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/data/dfs/jn</value>
    </property>
    <!--================== Client failover ============================================ -->
    <property>
        <!-- Proxy provider that DataNodes and clients use to determine which NameNode is currently active (implements automatic failover on the client side) -->
        <name>dfs.client.failover.proxy.provider.ruozeclusterg5</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!--================== NameNode fencing =============================================== -->
    <!-- After a failover, prevents the old NameNode from coming back as active, which would leave two active NameNodes (split-brain) -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <property>
        <!-- Milliseconds after which fencing is considered to have failed -->
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>

    <!--================== NameNode automatic failover via ZKFC and ZooKeeper ====================== -->
    <!-- Enable ZooKeeper-based automatic failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- File listing the DataNodes permitted to connect to the NameNode -->
     <property>
       <name>dfs.hosts</name>
       <value>/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/slaves</value>
     </property>
</configuration>
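
Once this file is distributed, a quick sanity check that the HA settings are being picked up (the expected values follow from the configuration above):

[hadoop@hadoop000 hadoop]$ hdfs getconf -confKey dfs.nameservices   # should print ruozeclusterg5
[hadoop@hadoop000 hadoop]$ hdfs getconf -namenodes                  # should print hadoop000 hadoop001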
4) Edit mapred-site.xml

[hadoop@hadoop000 hadoop]$ vi mapred-site.xml

<configuration>
    <!-- Run MapReduce applications on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- JobHistory Server ============================================================== -->
    <!-- MapReduce JobHistory Server address; default port 10020 -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop001:10020</value>
    </property>
    <!-- MapReduce JobHistory Server web UI address; default port 19888 -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop001:19888</value>
    </property>

<!-- Compress map-stage output with Snappy -->
  <property>
      <name>mapreduce.map.output.compress</name> 
      <value>true</value>
  </property>

  <property>
      <name>mapreduce.map.output.compress.codec</name> 
      <value>org.apache.hadoop.io.compress.SnappyCodec</value>
   </property>

</configuration>
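
Snappy compression of map output only works if the native Snappy library is loaded; hadoop checknative is a quick way to confirm (the library path varies by install):

[hadoop@hadoop000 hadoop]$ hadoop checknative -a
# look for a line like "snappy: true /path/to/libsnappy.so"; false means map tasks will fail to compress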
5) Edit yarn-site.xml

[hadoop@hadoop000 hadoop]$ vi yarn-site.xml

<configuration>
    <!-- NodeManager configuration ================================================= -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.nodemanager.localizer.address</name>
        <value>0.0.0.0:23344</value>
        <description>Address where the localizer IPC is.</description>
    </property>
    <property>
        <name>yarn.nodemanager.webapp.address</name>
        <value>0.0.0.0:23999</value>
        <description>NM Webapp address.</description>
    </property>

    <!-- HA configuration =============================================================== -->
    <!-- Resource Manager Configs -->
    <property>
        <name>yarn.resourcemanager.connect.retry-interval.ms</name>
        <value>2000</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- Use the embedded leader elector for automatic failover; in an HA setup it works with ZKRMStateStore to handle fencing -->
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
        <value>true</value>
    </property>
    <!-- Cluster ID, so that HA elections are scoped to this cluster -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yarn-cluster</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>


    <!-- (Optional) yarn.resourcemanager.ha.id names this specific RM; if used, set a different value on each RM node
    <property>
         <name>yarn.resourcemanager.ha.id</name>
         <value>rm2</value>
     </property>
     -->

    <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
        <value>5000</value>
    </property>
    <!-- ZKRMStateStore configuration -->
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>hadoop000:2181,hadoop001:2181,hadoop002:2181</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk.state-store.address</name>
        <value>hadoop000:2181,hadoop001:2181,hadoop002:2181</value>
    </property>
    <!-- RPC address clients use to reach the RM (applications manager interface) -->
    <property>
        <name>yarn.resourcemanager.address.rm1</name>
        <value>hadoop001:23140</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address.rm2</name>
        <value>hadoop002:23140</value>
    </property>
    <!-- RPC address ApplicationMasters use to reach the RM (scheduler interface) -->
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm1</name>
        <value>hadoop001:23130</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm2</name>
        <value>hadoop002:23130</value>
    </property>
    <!-- RM admin interface -->
    <property>
        <name>yarn.resourcemanager.admin.address.rm1</name>
        <value>hadoop001:23141</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address.rm2</name>
        <value>hadoop002:23141</value>
    </property>
    <!-- RPC port NodeManagers use to reach the RM -->
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
        <value>hadoop001:23125</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
        <value>hadoop002:23125</value>
    </property>
    <!-- RM web application addresses -->
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>hadoop001:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>hadoop002:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.https.address.rm1</name>
        <value>hadoop001:23189</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.https.address.rm2</name>
        <value>hadoop002:23189</value>
    </property>

    <property>
       <name>yarn.log-aggregation-enable</name>
       <value>true</value>
    </property>
    <property>
         <name>yarn.log.server.url</name>
         <value>http://hadoop001:19888/jobhistory/logs</value>
    </property>


    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>1024</value>
        <description>Minimum memory a single container can request; default 1024 MB</description>
     </property>


  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
    <description>Maximum memory a single container can request; default 8192 MB</description>
  </property>

   <property>
       <name>yarn.nodemanager.resource.cpu-vcores</name>
       <value>2</value>
    </property>

</configuration>
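Before distributing these files, it is worth catching XML typos (a mismatched tag, a stray character) early; a minimal sketch, assuming xmllint is installed:

cd /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop
for f in core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do
  xmllint --noout "$f" && echo "$f OK"   # --noout stays silent unless the XML is malformed
done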

6) Edit slaves

[hadoop@hadoop000 hadoop]$ vim slaves

hadoop000
hadoop001
hadoop002
7) Copy Hadoop to the other nodes
[hadoop@hadoop000 app]$ scp -r hadoop-2.6.0-cdh5.7.0/ hadoop@hadoop001:/home/hadoop/app/
[hadoop@hadoop000 app]$ scp -r hadoop-2.6.0-cdh5.7.0/ hadoop@hadoop002:/home/hadoop/app/
8) Configure the Hadoop environment variables
[hadoop@hadoop000 app]$ vim ~/.bash_profile
export HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
[hadoop@hadoop000 app]$ source ~/.bash_profile
[hadoop@hadoop001 app]$ vim ~/.bash_profile
export HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
[hadoop@hadoop001 app]$ source ~/.bash_profile
[hadoop@hadoop002 app]$ vim ~/.bash_profile
export HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
[hadoop@hadoop002 app]$ source ~/.bash_profile
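
A quick check that the PATH update took effect on each node:

[hadoop@hadoop000 app]$ which hadoop    # expect /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin/hadoop
[hadoop@hadoop000 app]$ hadoop version  # expect Hadoop 2.6.0-cdh5.7.0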

3 Start the cluster

1 On each JournalNode host, run the following to start the journalnode service (the ZooKeeper ensemble must already be running):

[hadoop@hadoop000 app]$ hadoop-daemon.sh start journalnode
[hadoop@hadoop001 app]$ hadoop-daemon.sh start journalnode
[hadoop@hadoop002 app]$ hadoop-daemon.sh start journalnode
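
jps on each node should now show a JournalNode process alongside ZooKeeper's QuorumPeerMain:

[hadoop@hadoop000 app]$ jps
# expect JournalNode and QuorumPeerMain among the listed processes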

2 On [nn1], format the NameNode and start it:

[hadoop@hadoop000 app]$ hdfs namenode -format

Start the NameNode on nn1:

[hadoop@hadoop000 app]$ hadoop-daemon.sh  start namenode
starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop000.out

3 On [nn2], sync nn1's metadata:

[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ hdfs namenode -bootstrapStandby

4 Start [nn2]:

[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ hadoop-daemon.sh start namenode
starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop001.out

5 On [nn1], start all the DataNodes:

[hadoop@hadoop000 app]$ hadoop-daemons.sh start datanode
hadoop000: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop000.out
hadoop002: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop002.out
hadoop001: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop001.out

6 Check the web UI

First add the following entries to the Windows hosts file:

192.168.142.150 hadoop000
192.168.142.151 hadoop001
192.168.142.152 hadoop002

http://hadoop000:50070

7 Manual state switching. Start the DFSZKFailoverController (ZKFC) on each NameNode host; whichever machine starts it first has its NameNode become the active NameNode.

First, initialize the ZKFC state in ZooKeeper:

[hadoop@hadoop000 hadoop]$ hdfs zkfc -formatZK
[hadoop@hadoop000 app]$ hadoop-daemon.sh start zkfc
starting zkfc, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-zkfc-hadoop000.out
[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ hadoop-daemon.sh start zkfc
starting zkfc, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-zkfc-hadoop001.out
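
With both ZKFCs running, the NameNode states can be checked directly:

[hadoop@hadoop000 hadoop]$ hdfs haadmin -getServiceState nn1   # active or standby
[hadoop@hadoop000 hadoop]$ hdfs haadmin -getServiceState nn2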

Alternatively, force one node to become active manually (this only works while both NameNodes are in standby; otherwise it reports that nn1 is already active):
[hadoop@hadoop000 hadoop]$ hdfs haadmin -transitionToActive nn2 --forcemanual

When hadoop000 is active, we can inspect the znodes that HDFS HA created in ZooKeeper:
[hadoop@hadoop000 hadoop]$ zkCli.sh
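
Inside the ZooKeeper shell, the HA znodes live under /hadoop-ha/<nameservice> (the paths below assume the nameservice configured earlier):

ls /hadoop-ha
ls /hadoop-ha/ruozeclusterg5                             # expect ActiveBreadCrumb and ActiveStandbyElectorLock
get /hadoop-ha/ruozeclusterg5/ActiveStandbyElectorLock   # shows which NameNode holds the active lock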

8 Start the HDFS service and check the NameNode states

[hadoop@hadoop000 hadoop]$ start-dfs.sh
18/11/28 23:47:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop000 hadoop001]
hadoop000: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop000.out
hadoop001: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop001.out
hadoop000: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop000.out
hadoop002: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop002.out
hadoop001: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop001.out
Starting journal nodes [hadoop000 hadoop001 hadoop002]
hadoop000: starting journalnode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-journalnode-hadoop000.out
hadoop002: starting journalnode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-journalnode-hadoop002.out
hadoop001: starting journalnode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-journalnode-hadoop001.out
18/11/28 23:47:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting ZK Failover Controllers on NN hosts [hadoop000 hadoop001]
hadoop001: starting zkfc, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-zkfc-hadoop001.out
hadoop000: starting zkfc, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-zkfc-hadoop000.out

start-dfs.sh starts the daemons in this order:
namenode -> datanode -> journalnode -> zkfc
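
A quick way to exercise automatic failover (a sketch; run the kill on whichever node hosts the active NameNode, here assumed to be hadoop000/nn1):

[hadoop@hadoop000 app]$ kill -9 $(jps | awk '$2=="NameNode"{print $1}')   # simulate an active-NameNode crash
[hadoop@hadoop000 app]$ hdfs haadmin -getServiceState nn2                 # the standby should now report: active
[hadoop@hadoop000 app]$ hadoop-daemon.sh start namenode                   # bring the old NameNode back; it rejoins as standby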

9 Start YARN

9.1 Run start-yarn.sh on hadoop001:
[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-resourcemanager-hadoop001.out
hadoop000: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop000.out
hadoop002: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop002.out
hadoop001: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop001.out
9.2 On hadoop002, start a second ResourceManager to make YARN highly available:
[hadoop@hadoop002 data]$ yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-resourcemanager-hadoop002.out
9.3 Check the RM states:
[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ yarn rmadmin -getServiceState rm1
18/11/28 23:54:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
active
[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ yarn rmadmin -getServiceState rm2
18/11/28 23:55:12 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
standby
9.4 Check the web UIs

Active RM: http://hadoop001:8088/

http://hadoop001:8088/cluster/cluster

http://hadoop002:8088/cluster/cluster
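
Finally, a small end-to-end smoke test with the bundled example job (the jar path below is assumed from this CDH tarball layout; adjust it if yours differs):

[hadoop@hadoop000 app]$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.7.0.jar pi 2 10
# the job should be accepted by the active RM and print an estimate of pi on completion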

