This guide assumes Hadoop 0.20.2 or a later version is already installed and working.
Package preparation
Required packages:
zookeeper-3.3.2.tar.gz (stable release)
hbase-0.20.2.tar.gz (stable release)
Installation steps
Installing and configuring ZooKeeper
Since HBase 0.20.0, ZooKeeper must be installed first. Download zookeeper-3.2.1.tar.gz (stable release) from the Apache site and extract it to the /home/hdfs/ directory.
(1) On the namenode, create a zookeeper directory and create a myid file inside it.
(2) In zookeeper-3.2.1/conf, copy zoo_sample.cfg to zoo.cfg. In zoo.cfg, change dataDir to /home/hdfs/zookeeper and append all hosts at the end of the file:
server.1=10.192.1.1:2888:3888
server.2=10.192.1.2:2888:3888
server.3=10.192.1.3:2888:3888
(3) Use scp to copy the namenode's /home/hdfs/zookeeper-3.2.1 and /home/hdfs/zookeeper to the /home/hdfs directory on every other host.
(4) Following the configuration in zoo.cfg, write each host's own ID into its myid file; for example, 10.192.1.1 writes 1 and 10.192.1.2 writes 2.
(5) Run bin/zkServer.sh start on every node to start each server.
Run bin/zkCli.sh -server xxx.xxx.xxx.xxx:2181 to check whether the given server started successfully.
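The per-host myid assignment in steps (2)–(4) is easy to automate: the number each host must write is exactly the N of the server.N line in zoo.cfg whose address matches that host. A minimal sketch, assuming the server.N=ip:2888:3888 form used above (the zoo.cfg path and the host IP are parameters):

```shell
#!/bin/sh
# derive_myid CFG_FILE HOST_IP -> prints the N of the matching server.N line
derive_myid() {
  cfg="$1"; ip="$2"
  # match "server.N=<ip>:..." and print the captured N
  sed -n "s/^server\.\([0-9][0-9]*\)=$ip:.*/\1/p" "$cfg"
}

# Demo against a zoo.cfg fragment like the one above
cat > /tmp/zoo_demo.cfg <<'EOF'
server.1=10.192.1.1:2888:3888
server.2=10.192.1.2:2888:3888
server.3=10.192.1.3:2888:3888
EOF
derive_myid /tmp/zoo_demo.cfg 10.192.1.2 > /tmp/myid
```

On a real host you would pass the host's own address and redirect the output to /home/hdfs/zookeeper/myid.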
Edit /etc/profile and append the following environment variables:
vi /etc/profile
export HBASE_HOME=/hadoop/hbase
export PATH=$PATH:$HBASE_HOME/bin
export HADOOP_HOME=/hadoop/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
Installing and configuring HBase
Download an HBase 0.20.x release (0.20.6 is used below) and extract it to the namenode's /home/hdfs directory, then move it into place:
tar -zxvf hbase-0.20.6.tar.gz
cd hbase-0.20.6
mv * /hadoop/hbase
Configuration notes
(1) The defaults for all configuration items can be inspected in hbase-default.xml; to override a value, add the item to hbase-site.xml.
For an HBase installation in distributed mode, the minimal required configuration items are:
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoop2-namenode:9000/hbase</value>
<description>The directory shared by region servers.</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>The mode the cluster will be in. Possible values are
false: standalone and pseudo-distributed setups with managed Zookeeper
true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
</description>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/hadoop/zookeeper</value>
<description>Property from ZooKeeper's config zoo.cfg.
The directory where the snapshot is stored.
</description>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2222</value>
<description>Property from ZooKeeper's config zoo.cfg.
The port at which the clients will connect.
</description>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hadoop2-namenode,hadoop2-datanode1,hadoop2-datanode2</value>
<description>Comma separated list of servers in the ZooKeeper Quorum.
For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
By default this is set to localhost for local and pseudo-distributed modes
of operation. For a fully-distributed setup, this should be set to a full
list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
this is the list of servers which we will start/stop ZooKeeper on.
</description>
</property>
(2) Add or modify the following settings in conf/hbase-env.sh:
export JAVA_HOME=/usr/java/jdk1.6.0_22
export HBASE_MANAGES_ZK=true
export HADOOP_CONF_DIR=/hadoop/hadoop-config
Note: HADOOP_CLASSPATH should point at the HBase environment, while HBASE_CLASSPATH should point at the Hadoop environment; setting this resolves runtime errors when compiling HBase programs.
Also copy ~/hadoop-0.20.1/conf/hdfs-site.xml into ~/hbase-3.2.1/conf/.
(3) Add ZooKeeper's configuration file zoo.cfg to the CLASSPATH on every HBase host.
(4) In conf/regionservers, list all the datanode nodes from hadoop-0.20.1/conf/slaves:
hadoop2-datanode1
hadoop2-datanode2
Startup
Hadoop, ZooKeeper, and HBase should be started and stopped in this order: start Hadoop → start the ZooKeeper cluster → start HBase → stop HBase → stop the ZooKeeper cluster → stop Hadoop.
On the namenode, run bin/hbase-daemon.sh to start the master; use the bin/start-hbase.sh and bin/stop-hbase.sh scripts to start and stop the HBase service:
/hadoop/hbase/bin/hbase-daemon.sh start master
/hadoop/hbase/bin/hbase-daemon.sh stop master
/hadoop/hbase/bin/start-hbase.sh
/hadoop/hbase/bin/stop-hbase.sh
/hadoop/hbase/bin/hbase shell
================================================
Environment preparation
1. Install VMware on Windows.
2. Create three Fedora 14 Linux virtual machines with the following addresses:
m201 192.168.0.201 (Namenode)
s202 192.168.0.202 (Datanode)
s203 192.168.0.203 (Datanode)
3. On the Linux systems, download the required software:
jdk-6u23-linux-i586-rpm.bin
hadoop-0.20.2.tar.gz
zookeeper-3.3.3.tar.gz
hbase-0.90.2.tar.gz
Save the downloaded packages to the /root/install directory.
Installing the JDK (perform the same steps on s202 and s203)
1. Simply run jdk-6u23-linux-i586-rpm.bin; the JDK is installed under /usr/java/jdk1.6.0_23.
2. Set the Java environment variables by editing /etc/profile. Append at the end of the file:
export JAVA_HOME=/usr/java/jdk1.6.0_23/
export JRE_HOME=/usr/java/jdk1.6.0_23/jre/
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$PATH
3. Apply /etc/profile by sourcing it (source /etc/profile).
Setting up SSH (so m201 can reach s202 and s203 without a password)
A passage from the official documentation:
Now check that you can ssh to the localhost without a passphrase:
$ ssh localhost
If you cannot ssh to localhost without a passphrase, execute the following commands:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Append m201's id_dsa.pub file to the authorized_keys file on s202 and s203.
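The key distribution above can be expressed as a small loop. The sketch below only prints the commands it would run against the hostnames assumed in this walkthrough, so it can be reviewed first and then piped to sh:

```shell
#!/bin/sh
# Emit (not execute) the commands that append m201's public key to each
# slave's authorized_keys. Review the output, then pipe it to sh to run.
emit_key_push() {
  pubkey="$1"; shift
  for host in "$@"; do
    echo "cat $pubkey | ssh root@$host 'cat >> ~/.ssh/authorized_keys'"
  done
}
emit_key_push "$HOME/.ssh/id_dsa.pub" s202 s203
```

Printing before executing is a deliberate choice here: ssh commands that modify authorized_keys are hard to undo remotely if a hostname is wrong.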
Installing Hadoop
1. In /root/install, extract hadoop-0.20.2.tar.gz by running tar -zxvf hadoop-0.20.2.tar.gz; this creates the hadoop-0.20.2 directory.
2. Enter the /root/install/hadoop-0.20.2/conf directory.
3. Edit the masters file (defines the master IP):
192.168.0.201
4. Edit the slaves file (defines the slave IPs):
192.168.0.202
192.168.0.203
5. Edit hadoop-env.sh (set the JDK path):
export JAVA_HOME=/usr/java/jdk1.6.0_23
6. Edit core-site.xml and add inside <configuration>:
<property>
<name>hadoop.tmp.dir</name>
<value>/hadoopdata</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://m201:9000</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
7. Edit hdfs-site.xml and add inside <configuration>:
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
8. Edit mapred-site.xml and add inside <configuration>:
<property>
<name>mapred.job.tracker</name>
<value>m201:9001</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
9. Set the environment variables by editing /etc/profile:
export HADOOP_HOME=/root/install/hadoop-0.20.2
export PATH=$HADOOP_HOME/bin:$PATH
Perform the same steps on s202 and s203.
Source /etc/profile to apply the changes.
10. Edit /etc/hosts and add:
192.168.0.201 m201
192.168.0.202 s202
192.168.0.203 s203
Perform the same steps on s202 and s203.
11. Copy the /root/install/hadoop-0.20.2 directory to s202 and s203, for example with scp -r <source> <host>:<destination>.
12. Format the HDFS filesystem:
/root/install/hadoop-0.20.2/bin/hadoop namenode -format
13. Run /root/install/hadoop-0.20.2/bin/start-all.sh to start the services;
run /root/install/hadoop-0.20.2/bin/stop-all.sh to stop them.
Hadoop installation is complete.
Open
http://192.168.0.201:50070/dfshealth.jsp
to check whether Hadoop is running.
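Besides the web page, a quick sanity check is to look for the expected daemons in `jps` output. A small sketch, assuming the standard Hadoop 0.20 daemon names (the function takes the jps text as an argument so it can be used on any node):

```shell
#!/bin/sh
# check_daemons "JPS_OUTPUT" NAME... -> succeeds only if every NAME appears
check_daemons() {
  out="$1"; shift
  for name in "$@"; do
    # -w matches whole words so "NameNode" does not match "SecondaryNameNode"
    echo "$out" | grep -qw "$name" || { echo "missing: $name"; return 1; }
  done
  echo "all daemons running"
}
# On the namenode you would run:
#   check_daemons "$(jps)" NameNode JobTracker SecondaryNameNode
# On a datanode:
#   check_daemons "$(jps)" DataNode TaskTracker
```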
Installing ZooKeeper (performed on m201)
1. Create a zookeeper directory under /root/install/hadoop-0.20.2:
cd /root/install/hadoop-0.20.2
mkdir zookeeper
2. Extract ZooKeeper in the /root/install directory:
cd /root/install
tar -zxvf zookeeper-3.3.3.tar.gz
3. Move ZooKeeper into the /root/install/hadoop-0.20.2/zookeeper directory (the tarball extracts to zookeeper-3.3.3):
cd /root/install/zookeeper-3.3.3
mv * /root/install/hadoop-0.20.2/zookeeper
4. Configure ZooKeeper.
1) Create the zoo.cfg file:
cd /root/install/hadoop-0.20.2/zookeeper/conf
cp zoo_sample.cfg zoo.cfg
2) Edit zoo.cfg; the complete file should read:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/root/install/hadoop-0.20.2/zookeeper/zookeeper-data
dataLogDir=/root/install/hadoop-0.20.2/zookeeper/logs
# the port at which the clients will connect
clientPort=2181
server.1=m201:2888:3888
server.2=s202:2888:3888
server.3=s203:2888:3888
The dataDir, dataLogDir, and server.N lines are the newly added entries; everything else comes from zoo_sample.cfg. Keep any annotations on their own lines: ZooKeeper's properties parser does not strip a trailing # comment from a value.
3) Create the zookeeper-data directory:
cd /root/install/hadoop-0.20.2/zookeeper/
mkdir zookeeper-data
4) Create the myid file:
cd /root/install/hadoop-0.20.2/zookeeper/zookeeper-data
vi myid
Write 1 as the content of myid and save the file (:x in vi).
5. Copy the /root/install/hadoop-0.20.2/zookeeper directory to s202 and s203, for example with scp -r <source> <host>:<destination>.
6. On s202, change the content of myid to 2.
7. On s203, change the content of myid to 3.
8. Start ZooKeeper (run the same command on m201, s202, and s203):
/root/install/hadoop-0.20.2/zookeeper/bin/zkServer.sh start
/root/install/hadoop-0.20.2/zookeeper/bin/zkServer.sh stop (to stop)
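Steps 5–7 (copying the tree out and then fixing each host's myid by hand) can be combined by staging a per-host myid file locally before the scp. A sketch, using this walkthrough's host-to-id mapping and a hypothetical /tmp/zkstage staging directory:

```shell
#!/bin/sh
# Stage one zookeeper-data/myid per host before copying each tree out.
stage_myid() {
  base="$1"                       # local staging directory
  shift
  for pair in "$@"; do            # each remaining arg is host:id
    host=${pair%%:*}; id=${pair##*:}
    mkdir -p "$base/$host/zookeeper-data"
    echo "$id" > "$base/$host/zookeeper-data/myid"
  done
}
stage_myid /tmp/zkstage m201:1 s202:2 s203:3
# then, per slave:
#   scp -r /tmp/zkstage/s202/zookeeper-data \
#     root@s202:/root/install/hadoop-0.20.2/zookeeper/
```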
Installing HBase (performed on m201)
1. Create an hbase directory under /root/install/hadoop-0.20.2:
cd /root/install/hadoop-0.20.2
mkdir hbase
2. Extract HBase in the /root/install directory:
cd /root/install
tar -zxvf hbase-0.90.2.tar.gz
3. Move HBase into the /root/install/hadoop-0.20.2/hbase directory:
cd /root/install/hbase-0.90.2
mv * /root/install/hadoop-0.20.2/hbase
4. Configure HBase.
1) Edit /etc/profile and add:
export HBASE_HOME=/root/install/hadoop-0.20.2/hbase
export PATH=$PATH:$HBASE_HOME/bin
Perform the same steps on s202 and s203.
Source /etc/profile to apply the changes.
2) Edit hbase-site.xml:
cd /root/install/hadoop-0.20.2/hbase/conf
vi hbase-site.xml
Add inside <configuration>:
<property>
<name>hbase.rootdir</name>
<value>hdfs://m201:9000/hasexx</value>
<description>The directory shared by region servers.</description>
</property>
<property>
<name>hbase.master.port</name>
<value>60000</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>The mode the cluster will be in. Possible values are
false: standalone and pseudo-distributed setups with managed Zookeeper
true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
</description>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/root/install/hadoop-0.20.2/zookeeper</value>
<description>Property from ZooKeeper's config zoo.cfg.
The directory where the snapshot is stored.
</description>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
<description>Property from ZooKeeper's config zoo.cfg.
The port at which the clients will connect.
</description>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>m201,s202,s203</value>
<description>Comma separated list of servers in the ZooKeeper Quorum.
For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
By default this is set to localhost for local and pseudo-distributed modes
of operation. For a fully-distributed setup, this should be set to a full
list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
this is the list of servers which we will start/stop ZooKeeper on.
</description>
</property>
3) Edit hbase-env.sh and add:
export JAVA_HOME=/usr/java/jdk1.6.0_23/
export HBASE_CLASSPATH=/root/install/hadoop-0.20.2/conf
export HBASE_MANAGES_ZK=false
4) Copy ZooKeeper's zoo.cfg into the /root/install/hadoop-0.20.2/conf directory:
cp /root/install/hadoop-0.20.2/zookeeper/conf/zoo.cfg /root/install/hadoop-0.20.2/conf/
5) Edit the regionservers file; its complete content should be:
192.168.0.202
192.168.0.203
6) Copy Hadoop's hadoop-0.20.2-core.jar into HBase's lib directory, and delete the original hadoop-core-0.20-append-r1056497.jar there.
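This jar swap is easy to get wrong if the old jar lingers next to the new one, so it is worth keeping a backup instead of deleting outright. A sketch of step 6) under that assumption (the backup/ subdirectory is this sketch's own convention, not part of HBase):

```shell
#!/bin/sh
# Replace HBase's bundled hadoop jar with the cluster's own copy,
# moving the old jar into a backup/ subdirectory instead of deleting it.
swap_hadoop_jar() {
  newjar="$1"; libdir="$2"; oldjar="$3"
  mkdir -p "$libdir/backup"
  if [ -f "$libdir/$oldjar" ]; then
    mv "$libdir/$oldjar" "$libdir/backup/"
  fi
  cp "$newjar" "$libdir/"
}
# Usage for this walkthrough's paths:
#   swap_hadoop_jar /root/install/hadoop-0.20.2/hadoop-0.20.2-core.jar \
#     /root/install/hadoop-0.20.2/hbase/lib \
#     hadoop-core-0.20-append-r1056497.jar
```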
7) Copy the /root/install/hadoop-0.20.2/hbase directory to s202 and s203, for example with scp -r <source> <host>:<destination>.
5. Start and stop the services:
/root/install/hadoop-0.20.2/hbase/bin/start-hbase.sh
/root/install/hadoop-0.20.2/hbase/bin/stop-hbase.sh (to stop)
Check the web interfaces:
http://192.168.0.201:60010/master.jsp
http://192.168.0.202:60030/regionserver.jsp
http://192.168.0.203:60030/regionserver.jsp
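The three status pages above follow a fixed pattern (master on 60010, region servers on 60030, the defaults for this HBase generation), so they can be generated from a role table rather than typed by hand. A sketch that only builds the URLs; to actually probe them you could pipe each one through curl:

```shell
#!/bin/sh
# Print the status-page URL for a node given its role (master|region).
status_url() {
  host="$1"; role="$2"
  case "$role" in
    master) echo "http://$host:60010/master.jsp" ;;
    region) echo "http://$host:60030/regionserver.jsp" ;;
  esac
}
for spec in 192.168.0.201:master 192.168.0.202:region 192.168.0.203:region; do
  status_url "${spec%%:*}" "${spec##*:}"
  # e.g. follow up with: curl -s -o /dev/null -w '%{http_code}\n' "$url"
done
```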