Install the JDK
First, check whether Java is already installed:
java -version
If it is not installed, go to https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html and download the package for your system and architecture (this article uses jdk-8u381-linux-x64.tar.gz; adjust the commands below if your version differs).
Create a dedicated directory for it and move the downloaded archive there (adjust the source path to wherever your download actually landed):
sudo mkdir -p /usr/local/java
sudo mv ~/Downloads/jdk-8u381-linux-x64.tar.gz /usr/local/java/
Enter the directory and extract the archive:
cd /usr/local/java
sudo tar xvzf jdk-8u381-linux-x64.tar.gz
After a successful extraction you will see a jdk1.8.0_381 directory; the archive itself can then be deleted:
sudo rm jdk-8u381-linux-x64.tar.gz
Configure the JDK
Set the environment variables by opening the configuration file:
sudo vim /etc/profile
Append the following at the end:
JAVA_HOME=/usr/local/java/jdk1.8.0_381
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
export JAVA_HOME
export PATH
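If you prefer not to edit the file by hand, the same four lines can be appended in one shot (a minimal sketch; adjust the JDK path if your version differs):
sudo tee -a /etc/profile >/dev/null <<'EOF'
JAVA_HOME=/usr/local/java/jdk1.8.0_381
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
export JAVA_HOME
export PATH
EOF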
Tell Linux where the Java JDK is located and make it the default:
sudo update-alternatives --install /usr/bin/java java /usr/local/java/jdk1.8.0_381/bin/java 1
sudo update-alternatives --install /usr/bin/javac javac /usr/local/java/jdk1.8.0_381/bin/javac 1
sudo update-alternatives --install /usr/bin/javaws javaws /usr/local/java/jdk1.8.0_381/bin/javaws 1
sudo update-alternatives --set java /usr/local/java/jdk1.8.0_381/bin/java
sudo update-alternatives --set javac /usr/local/java/jdk1.8.0_381/bin/javac
sudo update-alternatives --set javaws /usr/local/java/jdk1.8.0_381/bin/javaws
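To double-check the registration, update-alternatives can report what it now knows about each command (standard update-alternatives options):
update-alternatives --display java
# lists every registered alternative and marks the current choice
sudo update-alternatives --config java
# interactively switch between JDKs if more than one is registered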
Reload the environment configuration file:
source /etc/profile
Check the Java version:
java -version
If output like the following appears, the installation succeeded:
java version "1.8.0_381"
Java(TM) SE Runtime Environment (build 1.8.0_381-b07)
Java HotSpot(TM) 64-Bit Server VM (build 25.381-b07, mixed mode)
Install Hadoop
Go to the mirror site https://mirrors.cnnic.cn/apache/hadoop/common/ and pick a Hadoop version (this article uses hadoop-3.3.6.tar.gz).
Then extract it into /usr/local and delete the archive:
sudo tar -zxf ~/Downloads/hadoop-3.3.6.tar.gz -C /usr/local
rm ~/Downloads/hadoop-3.3.6.tar.gz
Rename the directory and change its ownership (phenix is the username used here; substitute your own):
cd /usr/local/
sudo mv hadoop-3.3.6 hadoop
sudo chown -R phenix ./hadoop
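A quick way to confirm the ownership change took effect (plain ls is enough here):
ls -ld /usr/local/hadoop
# the owner column should now show phenix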
Check the Hadoop version:
/usr/local/hadoop/bin/hadoop version
Output like the following indicates success:
Hadoop 3.3.6
Subversion ssh://git.corp.linkedin.com:29418/hadoop/hadoop.git -r e2f1f118e465e787d8567dfa6e2f3b72a0eb9194
From source with checksum 7b2d8877c5ce8c9a2cca5c7e81aa4026
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-3.3.6.jar
Configure Hadoop in pseudo-distributed mode
Switch to /usr/local/hadoop/etc/hadoop; two configuration files need to be modified: core-site.xml and hdfs-site.xml.
First, open core-site.xml:
cd /usr/local/hadoop/etc/hadoop
vim core-site.xml
Add the following configuration inside <configuration></configuration>:
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
        <description>Abase for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
Note: this article uses hdfs://localhost:9000, i.e. the HDFS file system.
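Once the file is saved, Hadoop can echo the effective value back, which makes for a quick sanity check:
/usr/local/hadoop/bin/hdfs getconf -confKey fs.defaultFS
# should print hdfs://localhost:9000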
Next, open hdfs-site.xml:
vim hdfs-site.xml
Likewise, add the following configuration inside <configuration></configuration>:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/hadoop/tmp/dfs/data</value>
    </property>
</configuration>
Note: dfs.replication is the number of replicas to keep; dfs.namenode.name.dir and dfs.datanode.data.dir are the storage paths of the NameNode and the DataNode respectively.
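For illustration, once files exist in HDFS (see the example later in this article) the replication factor of an individual file can be inspected or changed with hdfs dfs; a sketch using one of those file paths:
./bin/hdfs dfs -stat "%r" /user/hadoop/input/core-site.xml
# print the file's replication factor
./bin/hdfs dfs -setrep 1 /user/hadoop/input/core-site.xml
# explicitly set it to 1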
Switch back to the Hadoop home directory and format the NameNode (once it has been formatted successfully, do not format it again lightly):
cd /usr/local/hadoop
./bin/hdfs namenode -format
Output like the following indicates success:
18/08/20 11:07:16 INFO namenode.FSImageFormatProtobuf: Image file /usr/local/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 320 bytes saved in 0 seconds .
18/08/20 11:07:16 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid 0
18/08/20 11:07:16 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at phenix/127.0.1.1
************************************************************/
Manually add JAVA_HOME to the hadoop-env.sh file:
cd etc/hadoop/
vim hadoop-env.sh
Add the following line to hadoop-env.sh:
export JAVA_HOME=/usr/local/java/jdk1.8.0_381
Set up passwordless SSH login for the local machine (without it, startup fails with Permission denied). Switch to the ~/.ssh directory and run:
ssh-keygen -t rsa
# press Enter through every prompt (answer yes where asked)
cat id_rsa.pub >> authorized_keys
# append the public key to the authorized_keys file
chmod 600 authorized_keys
# restrict the file permissions
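A quick test: logging in to localhost should now succeed without a password prompt (type exit afterwards to return):
ssh localhost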
Start the NameNode and DataNode daemons:
./sbin/start-dfs.sh
Start the YARN resource manager:
./sbin/start-yarn.sh
Verify:
jps
If the following six processes appear, everything started successfully:
18192 DataNode
18922 NodeManager
20044 Jps
18812 ResourceManager
18381 SecondaryNameNode
18047 NameNode
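When you are finished, the daemons can be shut down with the matching stop scripts, run from the Hadoop home directory:
./sbin/stop-yarn.sh
./sbin/stop-dfs.sh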
A simple example
First, switch to the Hadoop home directory and create a user directory in HDFS:
./bin/hdfs dfs -mkdir -p /user/hadoop
Create an input directory:
./bin/hdfs dfs -mkdir /user/hadoop/input
Copy all the XML files under etc/hadoop into it:
./bin/hdfs dfs -put ./etc/hadoop/*.xml /user/hadoop/input
Then list them with:
./bin/hdfs dfs -ls /user/hadoop/input
The result looks like this:
Found 8 items
-rw-r--r-- 1 phenix supergroup 8814 2020-01-31 13:21 /user/hadoop/input/capacity-scheduler.xml
-rw-r--r-- 1 phenix supergroup 1119 2020-01-31 13:21 /user/hadoop/input/core-site.xml
-rw-r--r-- 1 phenix supergroup 10206 2020-01-31 13:21 /user/hadoop/input/hadoop-policy.xml
-rw-r--r-- 1 phenix supergroup 1173 2020-01-31 13:21 /user/hadoop/input/hdfs-site.xml
-rw-r--r-- 1 phenix supergroup 620 2020-01-31 13:21 /user/hadoop/input/httpfs-site.xml
-rw-r--r-- 1 phenix supergroup 3518 2020-01-31 13:21 /user/hadoop/input/kms-acls.xml
-rw-r--r-- 1 phenix supergroup 5939 2020-01-31 13:21 /user/hadoop/input/kms-site.xml
-rw-r--r-- 1 phenix supergroup 690 2020-01-31 13:21 /user/hadoop/input/yarn-site.xml
Run the grep example:
./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.6.jar grep /user/hadoop/input output 'dfs[a-z.]+'
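Note that the job will fail if the output directory already exists in HDFS; when re-running the example, remove it first:
./bin/hdfs dfs -rm -r output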
View the results:
./bin/hdfs dfs -cat output/*
The following output means the Hadoop cluster has been set up successfully:
1	dfsadmin
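The results can also be copied back to the local file system for inspection (the local ./output directory must not already exist):
./bin/hdfs dfs -get output ./output
cat ./output/*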
You can also use the HDFS web interface (though it only lets you browse file system data): open http://ip:9870 to view it.
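The same data is also reachable over the WebHDFS REST API; for example, assuming the NameNode runs on localhost:
curl -i "http://localhost:9870/webhdfs/v1/user/hadoop/input?op=LISTSTATUS"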