JStorm Installation and Cluster Setup (Storm cluster configuration is similar)
2017-11-11 21:12
The previous article covered setting up the ZooKeeper cluster; this article covers installing JStorm, configuring the cluster, and setting up the JStorm UI.
The latest JStorm release is 2.2.1 (see the download page). This article uses 192.168.72.140, 141, and 142 as the ZooKeeper cluster servers and 192.168.72.151, 152, and 153 as the JStorm cluster servers, with 151 serving as both the master and the UI server. With that, let's get into the main part of this article.
I. Environment Preparation
1. Configure the hostname and address mappings
On 151, run:
hostname jstorm-master
Then edit /etc/hosts (vim /etc/hosts) and add:
192.168.72.140 zookeeper-master
192.168.72.141 zookeeper-slave1
192.168.72.142 zookeeper-slave2
192.168.72.151 jstorm-master
192.168.72.152 jstorm-slave1
192.168.72.153 jstorm-slave2
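As a quick sanity check, the mapping lines above can be staged and verified first. This sketch writes them to a temp file so it is safe to run on any machine; on a real node you would append the same lines to /etc/hosts itself.

```shell
# Stage the /etc/hosts entries from the step above in a temp file
# (illustration only; on a real node, append to /etc/hosts itself).
HOSTS_FILE=$(mktemp)
cat >> "$HOSTS_FILE" <<'EOF'
192.168.72.140 zookeeper-master
192.168.72.141 zookeeper-slave1
192.168.72.142 zookeeper-slave2
192.168.72.151 jstorm-master
192.168.72.152 jstorm-slave1
192.168.72.153 jstorm-slave2
EOF
# Three jstorm-* hosts and three zookeeper-* hosts should be present:
grep -c 'jstorm-' "$HOSTS_FILE"
grep -c 'zookeeper-' "$HOSTS_FILE"
```

After the real /etc/hosts is updated on every node, each machine should be able to resolve the others by name (e.g. `getent hosts jstorm-master`).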
2. Create a jstorm directory under the root directory to hold all JStorm-related files
mkdir /jstorm
3. Extract the JStorm archive and copy it into the jstorm directory
cp -r jstorm-2.2.1 /jstorm/
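The extraction command itself is not shown above; assuming the release tarball is named jstorm-2.2.1.tgz (a hypothetical name — check your actual download), the sequence would be `tar -zxvf jstorm-2.2.1.tgz` followed by the `cp` above. A self-contained simulation of that sequence, using throwaway directories so it can run anywhere:

```shell
# Build a throwaway tarball standing in for the real JStorm release.
WORK=$(mktemp -d); cd "$WORK"
mkdir -p jstorm-2.2.1/bin
tar -czf jstorm-2.2.1.tgz jstorm-2.2.1
rm -r jstorm-2.2.1

# The actual steps: extract, then copy into the target directory
# (DEST stands in for /jstorm here).
tar -zxf jstorm-2.2.1.tgz
DEST=$(mktemp -d)
cp -r jstorm-2.2.1 "$DEST/"
ls "$DEST"
```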
4. Create a jstorm_data directory under /jstorm/jstorm-2.2.1/
mkdir /jstorm/jstorm-2.2.1/jstorm_data
5. Configure the JStorm environment variables
echo 'export JSTORM_HOME=/jstorm/jstorm-2.2.1' >> ~/.bashrc
echo 'export PATH=$PATH:$JSTORM_HOME/bin' >> ~/.bashrc
Reload the configuration file so the changes take effect:
source ~/.bashrc
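To confirm the variables from step 5 took effect after sourcing, a quick check (values as configured above; the exports are repeated here only to make the snippet self-contained):

```shell
# On the server these exports come from ~/.bashrc after `source ~/.bashrc`.
export JSTORM_HOME=/jstorm/jstorm-2.2.1
export PATH=$PATH:$JSTORM_HOME/bin
echo "$JSTORM_HOME"                        # /jstorm/jstorm-2.2.1
echo "$PATH" | grep -o "$JSTORM_HOME/bin"  # the bin dir is on PATH
```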
6. Create the jstorm_data folder for data generated at runtime (the same directory as step 4; `-p` makes this a no-op if it already exists)
mkdir -p /jstorm/jstorm-2.2.1/jstorm_data
7. Back up the storm.yaml file
cp /jstorm/jstorm-2.2.1/conf/storm.yaml /jstorm/jstorm-2.2.1/conf/storm.yaml.back
8. Edit the storm.yaml file
########### These MUST be filled in for a storm configuration
storm.zookeeper.servers:
    - "192.168.72.142"
    - "192.168.72.141"
    - "192.168.72.140"

storm.zookeeper.root: "/jstorm"

nimbus.host: "192.168.72.151"
# cluster.name: "default"

#nimbus.host/nimbus.host.start.supervisor is being used by $JSTORM_HOME/bin/start.sh
#it only support IP, please don't set hostname
# For example
# nimbus.host: "10.132.168.10, 10.132.168.45"
#nimbus.host.start.supervisor: false

# %JSTORM_HOME% is the jstorm home directory
storm.local.dir: "/jstorm/jstorm-2.2.1/jstorm_data"
# please set absolute path, default path is JSTORM_HOME/logs
# jstorm.log.dir: "absolute path"

# java.library.path: "/usr/local/lib:/opt/local/lib:/usr/lib"

nimbus.childopts: "-Xms1g -Xmx1g -Xmn512m -XX:SurvivorRatio=4 -XX:MaxTenuringThreshold=15 -XX:+UseConcMarkSweepGC -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -XX:+HeapDumpOnOutOfMemoryError -XX:CMSMaxAbortablePrecleanTime=5000"

# if supervisor.slots.ports is null,
# the port list will be generated by cpu cores and system memory size
# for example,
# there are cpu_num = system_physical_cpu_num/supervisor.slots.port.cpu.weight
# there are mem_num = system_physical_memory_size/(worker.memory.size * supervisor.slots.port.mem.weight)
# The final port number is min(cpu_num, mem_num)
# supervisor.slots.ports.base: 6800
# supervisor.slots.port.cpu.weight: 1.2
# supervisor.slots.port.mem.weight: 0.7
# supervisor.slots.ports: null
supervisor.slots.ports:
    - 6800
    - 6801
    - 6802
    - 6803

# Default disable user-define classloader
# If there are jar conflict between jstorm and application,
# please enable it
# topology.enable.classloader: false

# enable supervisor use cgroup to make resource isolation
# Before enable it, you should make sure:
# 1. Linux version (>= 2.6.18)
# 2. Have installed cgroup (check the file's existence:/proc/cgroups)
# 3. You should start your supervisor on root
# You can get more about cgroup:
# http://t.cn/8s7nexU
# supervisor.enable.cgroup: false

### Netty will send multiple messages in one batch
### Setting true will improve throughput, but more latency
# storm.messaging.netty.transfer.async.batch: true

### default worker memory size, unit is byte
# worker.memory.size: 2147483648

# Metrics Monitor
# topology.performance.metrics: it is the switch flag for performance
# purpose. When it is disabled, the data of timer and histogram metrics
# will not be collected.
# topology.alimonitor.metrics.post: If it is disable, metrics data
# will only be printed to log. If it is enabled, the metrics data will be
# posted to alimonitor besides printing to log.
# topology.performance.metrics: true
# topology.alimonitor.metrics.post: false

# UI MultiCluster
# Following is an example of multicluster UI configuration
ui.clusters:
    - {
        name: "jstorm",
        zkRoot: "/jstorm",
        zkServers: [ "192.168.72.140", "192.168.72.141", "192.168.72.142" ],
        zkPort: 2181,
    }
Note: the settings you need to modify are the following.
storm.zookeeper.servers:
    - "192.168.72.142"
    - "192.168.72.141"
    - "192.168.72.140"
This configures the ZooKeeper servers.
supervisor.slots.ports:
    - 6800
    - 6801
    - 6802
    - 6803
This configures the JStorm worker ports; each port corresponds to one worker slot on a supervisor. The defaults are usually fine — you only need to uncomment them.
Next, the location of the JStorm master service:
nimbus.host: "192.168.72.151"
nimbus.childopts: "-Xms1g -Xmx1g -Xmn512m -XX:SurvivorRatio=4 -XX:MaxTenuringThreshold=15 -XX:+UseConcMarkSweepGC -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -XX:+HeapDumpOnOutOfMemoryError -XX:CMSMaxAbortablePrecleanTime=5000"
This configures the JVM; without it, starting the UI may fail with memory-related errors.
ui.clusters:
    - {
        name: "jstorm",
        zkRoot: "/jstorm",
        zkServers: [ "192.168.72.140", "192.168.72.141", "192.168.72.142" ],
        zkPort: 2181,
    }
This block configures JStorm UI monitoring and only needs to be set on the UI server. In this example, 151 serves as both the nimbus and the UI server, so the slave nodes do not need this block. At this point the JStorm cluster configuration is essentially complete for the slave nodes.
The master node additionally needs to run the following script, and the second command must be rerun after every configuration change:
mkdir ~/.jstorm
cp -f $JSTORM_HOME/conf/storm.yaml ~/.jstorm
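A runnable sketch of the two commands above, using throwaway directories in place of the real $JSTORM_HOME and home directory so it can be tried anywhere; on the nimbus node, use the real paths instead.

```shell
# Throwaway stand-ins for $JSTORM_HOME and $HOME (illustration only).
FAKE_JSTORM_HOME=$(mktemp -d)
FAKE_HOME=$(mktemp -d)
mkdir -p "$FAKE_JSTORM_HOME/conf"
echo 'nimbus.host: "192.168.72.151"' > "$FAKE_JSTORM_HOME/conf/storm.yaml"

# The actual sync step: create ~/.jstorm and copy storm.yaml into it.
# Repeat the cp after every storm.yaml change.
mkdir -p "$FAKE_HOME/.jstorm"
cp -f "$FAKE_JSTORM_HOME/conf/storm.yaml" "$FAKE_HOME/.jstorm/"

grep 'nimbus.host' "$FAKE_HOME/.jstorm/storm.yaml"
```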
II. Starting the JStorm UI
Copy jstorm-ui-2.2.1.war from the jstorm directory into your Tomcat installation.
Then run:
mkdir ~/.jstorm
cp -f $JSTORM_HOME/conf/storm.yaml ~/.jstorm
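Putting section II together: drop the war into Tomcat's webapps directory and start Tomcat. $TOMCAT_HOME below is a placeholder for your actual Tomcat install path; the sketch simulates the layout with temp directories so it runs anywhere.

```shell
# Placeholders standing in for the real Tomcat install and war file.
TOMCAT_HOME=$(mktemp -d)                 # e.g. /usr/local/tomcat in practice
mkdir -p "$TOMCAT_HOME/webapps"
SRC=$(mktemp -d)
touch "$SRC/jstorm-ui-2.2.1.war"         # stand-in for the real war

# The deploy step: copy the war into webapps; Tomcat expands it on
# startup ($TOMCAT_HOME/bin/startup.sh on a real install).
cp "$SRC/jstorm-ui-2.2.1.war" "$TOMCAT_HOME/webapps/"
ls "$TOMCAT_HOME/webapps"
```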
Start Tomcat; if you see the following page, the UI has been configured successfully.
If you see the page below, the JStorm cluster has been set up successfully.