Setting up a ZooKeeper-Kafka-Storm messaging system
2015-11-13 09:06
1. Setting up ZooKeeper
1. Download the ZooKeeper binary package, v3.4.6: http://apache.fayea.com/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
2. Extract it:
sudo tar -zxvf zookeeper-3.4.6.tar.gz
3. Configure environment variables: run vi ~/.bashrc and add ZOOKEEPER_HOME
[code]export JAVA_HOME=/usr/java/jdk1.8.0_60
export ZOOKEEPER_HOME=/opt/software/zookeeper-3.4.6
export STORM_HOME=/opt/software/apache-storm-0.9.5
export PATH=$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$STORM_HOME/bin:$PATH
4. Edit conf/zoo.cfg: set dataDir and clientPort, and pay attention to the server.* entries at the bottom.
Remember to create the dataDir directory manually, or startup will fail.
[code][root@localhost conf]# cat zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/var/zookeeper/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=192.168.3.160:2888:3888
server.2=192.168.3.161:2888:3888
server.3=192.168.3.162:2888:3888
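Because zoo.cfg above lists a three-server ensemble, each node also needs a myid file inside dataDir holding that node's number from the matching server.N line, or ZooKeeper will refuse to start. A minimal sketch (it uses a temporary directory as a stand-in for /var/zookeeper/data so it can run anywhere; on a real node, write into the actual dataDir):

```shell
# Stand-in for dataDir=/var/zookeeper/data from zoo.cfg above
DATA_DIR=$(mktemp -d)

# Write this machine's ID: 1 on 192.168.3.160, 2 on 192.168.3.161, 3 on 192.168.3.162
echo 1 > "$DATA_DIR/myid"
cat "$DATA_DIR/myid"
```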
5. Common commands
zookeeper-3.4.6/bin/zkServer.sh {start|start-foreground|stop|restart|status|upgrade|print-cmd}
Run start first, then check with status:
[code][root@localhost zookeeper-3.4.6]# bin/zkServer.sh start
JMX enabled by default
Using config: /opt/software/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@localhost software]# zookeeper-3.4.6/bin/zkServer.sh status
JMX enabled by default
Using config: /opt/software/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: standalone
[root@localhost zookeeper-3.4.6]# bin/zkServer.sh stop
JMX enabled by default
Using config: /opt/software/zookeeper-3.4.6/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
2. Setting up Kafka
1. Download from: http://apache.fayea.com/kafka/0.8.2.2/kafka_2.11-0.8.2.2.tgz
2. Extract it:
sudo tar -zxvf kafka_2.11-0.8.2.2.tgz
3. Edit config/server.properties; note host.name=192.168.3.160 and zookeeper.connect=192.168.3.163:2181
[code]...
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
############################# Socket Server Settings #############################
# The port the socket server listens on
port=9092
# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=192.168.3.160
...
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=192.168.3.163:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
4. Start the Kafka server: nohup bin/kafka-server-start.sh config/server.properties >/dev/null 2>&1 &
With nohup the process ignores the hangup signal, so it keeps running after the remote connection is closed.
The brokers of a Kafka cluster all connect to the same ZooKeeper ensemble; producers send messages to a broker, and consumers obtain their subscriptions through ZooKeeper.
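This flow can be tried with the console tools that ship with Kafka 0.8.2 (run from the Kafka install directory, against a running broker and ZooKeeper; the topic name `test` is just an example, and the addresses are the ones assumed from the configs above):

```shell
# Register a topic in ZooKeeper
bin/kafka-topics.sh --create --zookeeper 192.168.3.163:2181 \
  --replication-factor 1 --partitions 1 --topic test

# Send messages to the broker (type lines, Ctrl+C to exit)
bin/kafka-console-producer.sh --broker-list 192.168.3.160:9092 --topic test

# Read the messages back via the ZooKeeper-tracked subscription (0.8.x consumer)
bin/kafka-console-consumer.sh --zookeeper 192.168.3.163:2181 --topic test --from-beginning
```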
3. Setting up Storm
1. Download from: http://apache.fayea.com/storm/apache-storm-0.9.5/apache-storm-0.9.5.tar.gz
2. Extract it:
sudo tar -zxvf apache-storm-0.9.5.tar.gz
3. Edit conf/storm.yaml
[code]# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

########### These MUST be filled in for a storm configuration
# storm.zookeeper.servers:
#     - "server1"
#     - "server2"
storm.zookeeper.servers:
    - "192.168.3.161"
storm.zookeeper.port: 2181
#
# nimbus.host: "nimbus"
#
nimbus.host: "192.168.3.160"
nimbus.childopts: -Xmx1024m -Djava.net.preferIPv4Stack=true
ui.childopts: -Xmx768m -Djava.net.preferIPv4Stack=true
ui.host: 0.0.0.0
ui.port: 8080
supervisor.childopts: -Djava.net.preferIPv4Stack=true
worker.childopts: -Xmx768m -Dfile.encoding=utf-8 -Djava.net.preferIPv4Stack=true
supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703
storm.local.dir: /data/cluster/storm
storm.log.dir: /data/cluster/storm/logs
logviewer.port: 8000
#
# ##### These may optionally be filled in:
#
## List of custom serializations
# topology.kryo.register:
#     - org.mycompany.MyType
#     - org.mycompany.MyType2: org.mycompany.MyType2Serializer
#
## List of custom kryo decorators
# topology.kryo.decorators:
#     - org.mycompany.MyDecorator
#
## Locations of the drpc servers
# drpc.servers:
#     - "server1"
#     - "server2"
drpc.servers:
    - "192.168.3.160"

## Metrics Consumers
# topology.metrics.consumer.register:
#   - class: "backtype.storm.metric.LoggingMetricsConsumer"
#     parallelism.hint: 1
#   - class: "org.mycompany.MyMetricsConsumer"
#     parallelism.hint: 1
#     argument:
#       - endpoint: "metrics-collector.mycompany.org"
supervisor.slots.ports: for each supervisor node, this configures how many workers the node may run. Each worker uses a dedicated port to receive messages, and this option defines which ports are available to workers. By default each node can run 4 workers, on ports 6700, 6701, 6702, and 6703.
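So, for example, a supervisor node that should run at most two workers would list only two ports (a storm.yaml fragment; the port numbers are just the defaults reused):

```yaml
# This supervisor node may run at most 2 workers
supervisor.slots.ports:
    - 6700
    - 6701
```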
4. Common commands
[code]# bin/storm nimbus >/dev/null 2>&1 &
# bin/storm supervisor >/dev/null 2>&1 &
# bin/storm ui >/dev/null 2>&1 &
#
# bin/storm jar storm-demo-1.0.jar io.sterm.demo.topology.WordCountTopology
# bin/storm kill word-count
Start nimbus in the background: bin/storm nimbus >/dev/null 2>&1 &
Start the supervisor in the background: bin/storm supervisor >/dev/null 2>&1 &
Start the UI in the background: bin/storm ui >/dev/null 2>&1 &
Parts of this article were adapted from:
http://www.cnblogs.com/panfeng412/archive/2012/11/30/how-to-install-and-deploy-storm-cluster.html