

    Setting Up a Hadoop 2.5.0 Pseudo-Distributed Environment

    This article walks through setting up a Hadoop 2.5.0 pseudo-distributed environment on Linux. Before installing Hadoop itself, a few prerequisites must be completed: creating a dedicated user, installing the JDK, and disabling the firewall.

    I. Creating the hadoop User

    As root, create a hadoop user. To keep things simple in a test environment, grant the hadoop user passwordless sudo:

    useradd hadoop    # add the hadoop user
    passwd hadoop     # set its password
    visudo            # then add the following line:
    hadoop ALL=(root) NOPASSWD:ALL
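The result can be double-checked before moving on. A minimal sketch (`user_exists` is our helper, not a system command; run `sudo -l -U hadoop` as root to list the rule added via visudo):

```shell
# Check that the hadoop user exists; id exits non-zero for unknown users.
user_exists() { id "$1" >/dev/null 2>&1; }

if user_exists hadoop; then
    echo "hadoop user present"
else
    echo "hadoop user missing" >&2
fi
# sudo -l -U hadoop    # lists the NOPASSWD rule (run as root)
```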

    II. Setting Up the Hadoop Pseudo-Distributed Environment

    1. Disable the firewall and SELinux

    Disable SELinux:

    sudo vi /etc/sysconfig/selinux   # open the SELinux config file
    SELINUX=disabled                 # set the SELINUX property to disabled

    Turn off the firewall:

    sudo service iptables status     # check firewall status
    sudo service iptables stop       # stop the firewall
    sudo chkconfig iptables off      # disable the firewall at boot
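Note that editing /etc/sysconfig/selinux only takes effect after a reboot; `sudo setenforce 0` switches to permissive mode immediately for the current session. A small sketch for confirming the file edit took (the `selinux_mode` helper is ours, not a system command):

```shell
# Print the SELINUX= value from a given config file.
selinux_mode() { grep -E '^SELINUX=' "$1" | cut -d= -f2; }
# selinux_mode /etc/sysconfig/selinux    # expect: disabled
```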

    2. Install the JDK

    First, check whether the system ships with a preinstalled JDK; if so, remove it before installing your own:

    rpm -qa | grep java # check for an installed JDK
    sudo rpm -e --nodeps java-1.6.0-openjdk-1.6.0.0-1.50.1.11.5.el6_3.x86_64 tzdata-java-2012j-1.el6.noarch java-1.7.0-openjdk-1.7.0.9-2.3.4.1.el6_3.x86_64 # remove the bundled JDKs

    Then install the JDK:

    Step 1. Extract the archive:

    tar -zxf jdk-7u67-linux-x64.tar.gz -C /usr/local/

    Step 2. Configure the environment variables and verify the installation:

    sudo vi /etc/profile # open the profile file
    ##JAVA_HOME
    export JAVA_HOME=/usr/local/jdk1.7.0_67
    export PATH=$PATH:$JAVA_HOME/bin

    # reload the file
    source /etc/profile # run as root

    # verify the configuration
    java -version

    3. Install Hadoop

    Step 1: Extract the Hadoop archive

    tar -zxvf /opt/software/hadoop-2.5.0.tar.gz -C /opt/software/

    Tip: delete the doc directory under /opt/software/hadoop-2.5.0/share to save space.

    Step 2: Set JAVA_HOME in the three env scripts under etc/hadoop: hadoop-env.sh, mapred-env.sh, and yarn-env.sh

    export JAVA_HOME=/usr/local/jdk1.7.0_67
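Since the same line has to be changed in three files, it can be applied in one pass. A sketch, assuming each env script already contains an `export JAVA_HOME=` line (the `set_java_home` helper is ours):

```shell
# Replace the "export JAVA_HOME=..." line in each file passed after the path.
set_java_home() {
    jh=$1; shift
    for f in "$@"; do
        sed -i "s|^export JAVA_HOME=.*|export JAVA_HOME=${jh}|" "$f"
    done
}
# set_java_home /usr/local/jdk1.7.0_67 etc/hadoop/hadoop-env.sh etc/hadoop/mapred-env.sh etc/hadoop/yarn-env.sh
```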

    Step 3: Edit core-site.xml

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!--
      Licensed under the Apache License, Version 2.0 (the "License");
      you may not use this file except in compliance with the License.
      You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      See the License for the specific language governing permissions and
      limitations under the License. See accompanying LICENSE file.
    -->

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
        <property>
            <name>name</name>
            <value>my-study-cluster</value>
        </property>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://bigdata01:8020</value>
        </property>
        <!-- Directory for temporary files Hadoop generates at runtime -->
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/opt/software/hadoop-2.5.0/data/tmp</value>
        </property>
        <property>
            <name>fs.trash.interval</name>
            <value>1440</value>
        </property>
        <property>
            <name>hadoop.http.staticuser.user</name>
            <value>hadoop</value>
        </property>
        <property>
            <name>hadoop.proxyuser.hadoop.hosts</name>
            <value>bigdata01</value>
        </property>
        <property>
            <name>hadoop.proxyuser.hadoop.groups</name>
            <value>*</value>
        </property>
    </configuration>

    Step 4: Edit hdfs-site.xml

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!--
      Licensed under the Apache License, Version 2.0 (the "License");
      you may not use this file except in compliance with the License.
      You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      See the License for the specific language governing permissions and
      limitations under the License. See accompanying LICENSE file.
    -->

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property>
        <property>
            <name>dfs.permissions.enabled</name>
            <value>false</value>
        </property>
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>/opt/software/hadoop-2.5.0/data/name</value>
        </property>
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>/opt/software/hadoop-2.5.0/data/data</value>
        </property>
    </configuration>
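The directories referenced above (hadoop.tmp.dir, dfs.namenode.name.dir, dfs.datanode.data.dir) can be created up front so the daemons don't fail on missing paths. A sketch using the layout from this guide (the `prepare_data_dirs` helper is ours):

```shell
# Create the tmp, name and data directories under a common base.
prepare_data_dirs() {
    base=$1
    mkdir -p "$base/tmp" "$base/name" "$base/data"
}
# prepare_data_dirs /opt/software/hadoop-2.5.0/data
```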

    Step 5: Edit mapred-site.xml

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!--
      Licensed under the Apache License, Version 2.0 (the "License");
      you may not use this file except in compliance with the License.
      You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      See the License for the specific language governing permissions and
      limitations under the License. See accompanying LICENSE file.
    -->

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
        <property>
            <name>mapreduce.jobhistory.address</name>
            <value>bigdata01:10020</value>
        </property>
        <property>
            <name>mapreduce.jobhistory.webapp.address</name>
            <value>bigdata01:19888</value>
        </property>
    </configuration>

    Step 6: Edit yarn-site.xml

    <?xml version="1.0"?>
    <!--
      Licensed under the Apache License, Version 2.0 (the "License");
      you may not use this file except in compliance with the License.
      You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      See the License for the specific language governing permissions and
      limitations under the License. See accompanying LICENSE file.
    -->
    <configuration>

    <!-- Site specific YARN configuration properties -->

        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
        <property>
            <name>yarn.resourcemanager.hostname</name>
            <value>bigdata01</value>
        </property>
        <property>
            <name>yarn.log-aggregation-enable</name>
            <value>true</value>
        </property>
        <property>
            <name>yarn.log-aggregation.retain-seconds</name>
            <value>106800</value>
        </property>
        <property>
            <name>yarn.log.server.url</name>
            <value>http://bigdata01:19888/jobhistory/job/</value>
        </property>
    </configuration>

    Step 7: Edit the slaves file

    bigdata01

    Step 8: Format the NameNode

    bin/hdfs namenode -format
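The format command prints a long log; in Hadoop 2.x a successful run includes the phrase "successfully formatted". A hedged helper for scanning a saved log (the exact wording can differ between versions; `format_ok` is our name):

```shell
# Exit 0 if the given log file contains the namenode format success marker.
format_ok() { grep -q "successfully formatted" "$1"; }
# bin/hdfs namenode -format 2>&1 | tee format.log
# format_ok format.log && echo "format succeeded"
```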

    Step 9: Start the daemons

    ## Option 1: start each daemon individually
    # start the namenode
    sbin/hadoop-daemon.sh start namenode
    # start the datanode
    sbin/hadoop-daemon.sh start datanode
    # start the resourcemanager
    sbin/yarn-daemon.sh start resourcemanager
    # start the nodemanager
    sbin/yarn-daemon.sh start nodemanager
    # start the secondarynamenode
    sbin/hadoop-daemon.sh start secondarynamenode
    # start the job history server
    sbin/mr-jobhistory-daemon.sh start historyserver

    ## Option 2: use the combined scripts
    sbin/start-dfs.sh  # starts namenode, datanode and secondarynamenode
    sbin/start-yarn.sh # starts resourcemanager and nodemanager
    sbin/mr-jobhistory-daemon.sh start historyserver # starts the job history server
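Whichever way the daemons are started, running `jps` afterwards should list all six. A sketch that flags any that failed to start (`check_daemons` is our helper; daemon names are as jps reports them):

```shell
# Print the name of each expected daemon that is absent from the jps output.
check_daemons() {
    for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager JobHistoryServer; do
        echo "$1" | grep -qw "$d" || echo "missing: $d"
    done
}
# check_daemons "$(jps)"
```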

    Step 10: Verify

    1. Open the HDFS web UI in a browser, on its external port 50070:

      http://bigdata01:50070

    2. Open the YARN web UI in a browser, on its external port 8088:

      http://bigdata01:8088

    3. Run the WordCount example:

      bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar wordcount input output

      Note: choose your own input and output directories.

    Done!

    These are the steps for setting up a Hadoop 2.5.0 pseudo-distributed environment. If you spot any problems, please point them out. Thanks!
