Change the system hostname

1. Edit the network file

```shell
vi /etc/sysconfig/network
# add the following two lines
NETWORKING=yes
HOSTNAME=node01
```
2. Edit the hosts file

```shell
vi /etc/hosts
# add one line
127.0.0.1 localhost localhost.node01
```
3. Edit the hostname file

```shell
vi /etc/hostname
# change the content to
node01
```
Reboot the server when you are done.
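The three edits above can also be scripted. A minimal sketch, assuming the CentOS-style file layout used in this guide; `ROOT` points at the filesystem root (`/` on a real machine, a scratch directory by default so the script can be dry-run safely):

```shell
# Sketch of the hostname changes above. ROOT=/ on a real system;
# it defaults to a scratch directory so a dry run touches nothing.
ROOT="${ROOT:-$(mktemp -d)}"
mkdir -p "$ROOT/etc/sysconfig"

# /etc/sysconfig/network: enable networking and set the hostname
printf 'NETWORKING=yes\nHOSTNAME=node01\n' >> "$ROOT/etc/sysconfig/network"

# /etc/hosts: loopback mapping for the new name
printf '127.0.0.1 localhost localhost.node01\n' >> "$ROOT/etc/hosts"

# /etc/hostname: the persistent hostname
printf 'node01\n' > "$ROOT/etc/hostname"
```

On a real system you would run it as root with `ROOT=/` and then reboot.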
Install the JDK

1. Download and extract

Download jdk1.8.0-201, then extract it:

```shell
tar -zxvf jdk-8u201-linux-x64.tar.gz
```
2. Configure environment variables

```shell
vi /etc/profile
# add these two lines
export JAVA_HOME=/root/jdk1.8.0_201
export PATH=$PATH:$JAVA_HOME/bin
```
Afterwards, run the commands below; if the Java version information is printed, the setup succeeded.

```shell
source /etc/profile
java -version
```
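One pitfall with appending to /etc/profile is that re-running the append duplicates the PATH entry. A small sketch that adds each line only if it is not already present; `PROFILE` is a stand-in for /etc/profile (a temp file here so the demo is harmless):

```shell
# Append a line to $PROFILE only if an identical line is not
# already there (grep -x matches whole lines, -F literally).
PROFILE="${PROFILE:-$(mktemp)}"

add_line() {
    grep -qxF "$1" "$PROFILE" || echo "$1" >> "$PROFILE"
}

add_line 'export JAVA_HOME=/root/jdk1.8.0_201'
add_line 'export PATH=$PATH:$JAVA_HOME/bin'

# running the same call again is a no-op
add_line 'export JAVA_HOME=/root/jdk1.8.0_201'
```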
Install Hadoop

1. Download and extract Hadoop

Download Hadoop 3.0.2, then extract it:

```shell
tar -zxvf hadoop-3.0.2.tar.gz
```
2. Configuration files

Go into the `etc/hadoop` directory under the Hadoop installation and edit the following configuration files.
1. Configure hadoop-env.sh

```shell
vi hadoop-env.sh
```

Change JAVA_HOME to the Java installation path.
2. Configure core-site.xml

```shell
vi core-site.xml
```

Configure the default file system and the data directory:

```xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://node01:9000/</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/root/hadoop-3.0.2/data/</value>
</property>
```
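When you set up more than one node, generating this file from variables avoids copy-paste slips. A sketch, assuming the hostname and install path used in this guide; `OUT` is a temp file here rather than the real core-site.xml:

```shell
# Generate a core-site.xml from variables. NN_HOST and DATA_DIR
# mirror the values used in this guide; OUT is a temp file so the
# demo does not overwrite anything.
NN_HOST=node01
DATA_DIR=/root/hadoop-3.0.2/data
OUT="${OUT:-$(mktemp)}"

cat > "$OUT" <<EOF
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://${NN_HOST}:9000/</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>${DATA_DIR}/</value>
  </property>
</configuration>
EOF
```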
3. Configure hdfs-site.xml

```shell
vi hdfs-site.xml
```

Set the replication factor:

```xml
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
```
4. Configure mapred-site.xml

```shell
vi mapred-site.xml
```

```xml
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
```
5. Configure yarn-site.xml

```shell
vi yarn-site.xml
```

```xml
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>node01</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
```
6. Configure environment variables

```shell
vi /etc/profile
# add this line
export HADOOP_HOME=/root/hadoop-3.0.2
# change PATH to
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# then reload the file
source /etc/profile
```
Passwordless SSH login

```shell
ssh-keygen -t rsa
cd .ssh
touch authorized_keys
chmod 600 authorized_keys
cat id_rsa.pub >> authorized_keys
```
Done. Then check that you can log in without a password:

```shell
ssh node01
```
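The key setup above can also be done non-interactively in one script. A sketch; `SSH_DIR` stands in for ~/.ssh (a scratch directory here so existing keys are untouched), and `-N ''` gives the key the empty passphrase that passwordless login requires:

```shell
# Non-interactive version of the SSH key setup. SSH_DIR stands in
# for ~/.ssh; a scratch directory is used so real keys are safe.
SSH_DIR="${SSH_DIR:-$(mktemp -d)}"

# generate an RSA keypair with no passphrase, without prompting
ssh-keygen -t rsa -N '' -q -f "$SSH_DIR/id_rsa"

# authorize the public key; sshd ignores authorized_keys files
# that are group- or world-writable, hence chmod 600
touch "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
```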
Start Hadoop

1. Initialize (format the NameNode)

```shell
hadoop namenode -format
```
2. Start the daemons

```shell
start-dfs.sh
start-yarn.sh
```
If you run into errors, the fixes are below.
If you get the following errors:

```
ERROR: Attempting to launch hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting launch.
Starting datanodes
ERROR: Attempting to launch hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting launch.
Starting secondary namenodes [localhost.localdomain]
ERROR: Attempting to launch hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting launch.
```
They are caused by missing user definitions. Edit the start and stop scripts sbin/start-dfs.sh and sbin/stop-dfs.sh, and add at the top:

```shell
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
```
If you get the following error:

```
Starting resourcemanager
ERROR: Attempting to launch yarn resourcemanager as root
ERROR: but there is no YARN_RESOURCEMANAGER_USER defined. Aborting launch.
```
It is likewise caused by a missing user definition. Edit the start and stop scripts sbin/start-yarn.sh and sbin/stop-yarn.sh, and add at the top:

```shell
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
```
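Patching the four start/stop scripts by hand is error-prone; the edit can be scripted. A sketch that inserts the definitions right after the shebang line; `SCRIPT` stands in for sbin/start-dfs.sh (a stub file here so nothing real is modified):

```shell
# Insert user definitions just below the shebang of a start script.
# SCRIPT stands in for sbin/start-dfs.sh; a stub file is used here.
SCRIPT="${SCRIPT:-$(mktemp)}"
printf '#!/usr/bin/env bash\necho starting\n' > "$SCRIPT"

# print line 1, then the definitions, then the rest of the file
awk 'NR==1 { print
             print "HDFS_DATANODE_USER=root"
             print "HDFS_DATANODE_SECURE_USER=hdfs"
             print "HDFS_NAMENODE_USER=root"
             print "HDFS_SECONDARYNAMENODE_USER=root"
             next }
     { print }' "$SCRIPT" > "$SCRIPT.new" && mv "$SCRIPT.new" "$SCRIPT"
```

Run the same transformation over each of the four scripts.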
When everything is up, run jps to check that the processes are running.
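On a healthy single-node setup, jps should list five Hadoop daemons besides Jps itself. A sketch of such a check; `SAMPLE` is canned jps-style output, so on a real node you would use `SAMPLE="$(jps)"` instead:

```shell
# Check a jps-style listing for the daemons a single-node setup
# should show. SAMPLE is canned output; replace with "$(jps)".
SAMPLE='1234 NameNode
1300 DataNode
1400 SecondaryNameNode
1500 ResourceManager
1600 NodeManager
1700 Jps'

missing=0
for proc in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    # -w matches whole words, so NameNode does not hit SecondaryNameNode
    echo "$SAMPLE" | grep -qw "$proc" || { echo "missing: $proc"; missing=1; }
done
test "$missing" -eq 0 && echo "all Hadoop daemons present"
```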