
[云星数据 --- Apache Flink in Practice series (premium edition)]: Apache Flink Fundamentals 007 -- Flink Distributed Deployment 002

2017-11-12 16:10

II. Hands-on: deploying Flink in standalone mode without master-node HA

1. Deployment plan: qingcheng11 serves as the JobManager (master); qingcheng11, qingcheng12, and qingcheng13 serve as TaskManagers (workers).



2. Configure the flink-conf.yaml file

vim ${FLINK_HOME}/conf/flink-conf.yaml


Add the following content:

Make the basic settings below in flink-conf.yaml; these are the entries changed for this deployment.

# The TaskManagers will try to connect to the JobManager on that host.
jobmanager.rpc.address: qingcheng11

# The heap size for the JobManager JVM
jobmanager.heap.mb: 1024

# The heap size for the TaskManager JVM
taskmanager.heap.mb: 1024

# The number of task slots that each TaskManager offers. Each slot runs one parallel pipeline.
taskmanager.numberOfTaskSlots: 4

# The default parallelism used for programs that do not specify one.
parallelism.default: 12

# You can also directly specify the paths to hdfs-default.xml and hdfs-site.xml
# via the keys 'fs.hdfs.hdfsdefault' and 'fs.hdfs.hdfssite'.
# Note: Flink does not expand shell variables in this file, so replace
# $HADOOP_HOME with the literal Hadoop installation path on your machines.
fs.hdfs.hadoopconf: $HADOOP_HOME/etc/hadoop
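These settings should be consistent with each other: with 3 TaskManagers (listed in the slaves file below) each offering 4 slots, the cluster has 12 slots in total, which matches parallelism.default: 12. A quick sanity check of that arithmetic (values hard-coded here to mirror this deployment):

```shell
#!/bin/sh
# Sanity check: total task slots must cover the default parallelism.
# Values mirror this deployment: 3 TaskManagers x 4 slots each.
TASKMANAGERS=3
SLOTS_PER_TM=4          # taskmanager.numberOfTaskSlots
DEFAULT_PARALLELISM=12  # parallelism.default

TOTAL_SLOTS=$((TASKMANAGERS * SLOTS_PER_TM))
echo "total slots: $TOTAL_SLOTS"

if [ "$DEFAULT_PARALLELISM" -gt "$TOTAL_SLOTS" ]; then
    echo "WARNING: parallelism.default exceeds available slots"
else
    echo "OK: default parallelism fits in $TOTAL_SLOTS slots"
fi
```

If parallelism.default were larger than the total slot count, jobs submitted with the default parallelism could not get enough slots to run.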


3. Configure the slaves file

This file lists the worker (slave) nodes, one host per line.

vim ${FLINK_HOME}/conf/slaves


Add the following content:

Add the hosts below to the slaves file; they are the cluster's TaskManagers.

qingcheng11
qingcheng12
qingcheng13
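As a sketch (using a temporary path so it will not overwrite a real installation), the file can be generated and then verified to contain exactly one host per line:

```shell
#!/bin/sh
# Write a slaves file listing the TaskManager hosts, then verify it.
# SLAVES_FILE is a temporary stand-in for ${FLINK_HOME}/conf/slaves.
SLAVES_FILE=$(mktemp)

cat > "$SLAVES_FILE" <<'EOF'
qingcheng11
qingcheng12
qingcheng13
EOF

# One host per line, no blanks: the line count should equal the worker count.
WORKER_COUNT=$(wc -l < "$SLAVES_FILE")
echo "workers: $WORKER_COUNT"
```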


4. Distribute the configuration files

scp -r ${FLINK_HOME}/conf/*  qingcheng12:${FLINK_HOME}/conf/
scp -r ${FLINK_HOME}/conf/*  qingcheng13:${FLINK_HOME}/conf/
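Assuming passwordless SSH is already set up between the nodes (also a prerequisite for start-cluster.sh), the copy can be scripted as a loop over the workers. The echo makes this a dry run; replace it with direct execution to actually copy:

```shell
#!/bin/sh
# Distribute the Flink configuration to the worker nodes.
# Dry run: each scp command is printed rather than executed.
FLINK_HOME=${FLINK_HOME:-/opt/flink}   # assumed install path

for host in qingcheng12 qingcheng13; do
    CMD="scp -r ${FLINK_HOME}/conf/* ${host}:${FLINK_HOME}/conf/"
    echo "$CMD"   # replace this echo with the bare command to really copy
done
```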


5. Start the Flink services

${FLINK_HOME}/bin/start-cluster.sh




6. Verify the Flink services

6.1 Verify via the process list

Run the following on every machine; each node should show its corresponding Flink daemon processes.

jps
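For a standalone cluster of this era, jps typically shows a JobManager process on the master and a TaskManager process on each worker (newer Flink releases name them StandaloneSessionClusterEntrypoint and TaskManagerRunner instead). A filter like the following, shown here against a made-up sample rather than live jps output, picks out the Flink daemons:

```shell
#!/bin/sh
# Filter Flink daemons out of jps-style output.
# SAMPLE is an illustrative stand-in so the sketch runs anywhere;
# in a real check, replace `echo "$SAMPLE"` with `jps`.
SAMPLE='2101 Jps
2433 JobManager
2788 TaskManager
1999 NameNode'

FLINK_PROCS=$(echo "$SAMPLE" | grep -E 'JobManager|TaskManager')
echo "$FLINK_PROCS"
COUNT=$(echo "$FLINK_PROCS" | wc -l)
echo "flink daemons: $COUNT"
```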


6.2 Verify via the Flink web UI

http://qingcheng11:8081
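The same information is exposed by the REST API behind the web UI: the /overview endpoint reports TaskManager and slot counts. The sketch below parses a sample response (the field names come from Flink's REST API, but the JSON here is illustrative, not captured from a real cluster):

```shell
#!/bin/sh
# Parse TaskManager and slot counts out of an /overview-style JSON response.
# RESPONSE is a stand-in for: curl -s http://qingcheng11:8081/overview
RESPONSE='{"taskmanagers":3,"slots-total":12,"slots-available":12,"jobs-running":0}'

TM_COUNT=$(echo "$RESPONSE" | sed -n 's/.*"taskmanagers":\([0-9]*\).*/\1/p')
SLOTS=$(echo "$RESPONSE" | sed -n 's/.*"slots-total":\([0-9]*\).*/\1/p')
echo "taskmanagers: $TM_COUNT, total slots: $SLOTS"
```

With this deployment's configuration, a healthy cluster should report 3 TaskManagers and 12 total slots.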


Flink cluster overview:



JobManager status:



TaskManager status:



These views show the overall state of the Flink cluster and confirm that this standalone deployment without master-node HA succeeded.

7. Common Flink commands

1. Start the cluster
${FLINK_HOME}/bin/start-cluster.sh

2. Stop the cluster
${FLINK_HOME}/bin/stop-cluster.sh

3. Start the Scala shell against the remote cluster
${FLINK_HOME}/bin/start-scala-shell.sh remote qingcheng11 6123

4. Start a JobManager
${FLINK_HOME}/bin/jobmanager.sh start

5. Stop a JobManager
${FLINK_HOME}/bin/jobmanager.sh stop

6. Start a TaskManager
${FLINK_HOME}/bin/taskmanager.sh start

7. Stop a TaskManager
${FLINK_HOME}/bin/taskmanager.sh stop
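As a small convenience, the commands above can sit behind a single entry point. The wrapper below is a hypothetical helper script, not part of Flink; since no cluster is assumed here, it only prints the command it would run:

```shell
#!/bin/sh
# flinkctl.sh (hypothetical helper): map a short action name to the
# corresponding Flink script. Prints the command instead of executing it,
# so it is safe to dry-run anywhere.
FLINK_HOME=${FLINK_HOME:-/opt/flink}   # assumed install path
ACTION=${1:-start}

case "$ACTION" in
    start)    CMD="${FLINK_HOME}/bin/start-cluster.sh" ;;
    stop)     CMD="${FLINK_HOME}/bin/stop-cluster.sh" ;;
    jm-start) CMD="${FLINK_HOME}/bin/jobmanager.sh start" ;;
    jm-stop)  CMD="${FLINK_HOME}/bin/jobmanager.sh stop" ;;
    tm-start) CMD="${FLINK_HOME}/bin/taskmanager.sh start" ;;
    tm-stop)  CMD="${FLINK_HOME}/bin/taskmanager.sh stop" ;;
    *) echo "usage: $0 {start|stop|jm-start|jm-stop|tm-start|tm-stop}" >&2
       exit 1 ;;
esac

echo "would run: $CMD"
```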