
Installing and Configuring MCollective + ActiveMQ with Puppet — A Hands-On Deployment Case

2016-10-19 16:11

Preface:

This article assumes a working Puppet installation and walks through installing and configuring MCollective + ActiveMQ on top of it. It is written for hands-on use: the commands are given in full and can be copied and executed directly.

For the detailed theory behind this deployment, see the companion post on this blog, "Installing and Configuring MCollective + ActiveMQ with Puppet — Detailed Guide".

Hands-On Deployment Case

Part 1: Deployment Environment Overview

OS: RHEL 6.3 / RHEL 7.1 (applies to Linux 6.x and 7.x)

Architecture: 1 puppet master + n puppet agents

Puppet version: 3.7.5 (applies to 3.x)

Ruby version: 1.8.7

Puppet master: puppetmaster.puppet.com (puppetmaster)

Puppet agents: agentx.puppet.com (e.g. agent1.puppet.com)

Puppet repo: already set up and able to cache packages (see the Puppet installation document)

Servers: all machines that run the MCollective daemon (master + agents)

Clients: the machines that admin users issue MCollective commands from (here, the master)

Puppet master ports: 8161, 61614, 8140, 61613, 443, 123, 22, 21, 80, 53
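If iptables is active on the master itself, the ports above need matching rules, in the same form used for the agents in Step 2. A sketch for RHEL 6, trimmed to the services this article actually deploys (8140 is the Puppet master, 61613/61614 the Stomp listeners, 8161 the ActiveMQ web console; adjust to your baseline policy):

```
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8140 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 61613 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 61614 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8161 -j ACCEPT
```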

Part 2: Deployment Plan Overview

Deploy ActiveMQ + MCollective on the puppet master.

Deploy MCollective on every puppet agent.

Configure the MCollective server side on the master and all agents.

Configure the MCollective client side on the master.

Part 3: Deployment Steps

Step 1: Create and collect credentials

1. Decide on the ActiveMQ username/password

mcollective/Guosir@eu2015

2. On the puppet master, have the CA generate the shared server credentials

sudo puppet cert generate mcollective-servers

3. On the puppet master, have the CA generate the client credentials

sudo puppet cert generate padmin-mcollective-client

# The name padmin-mcollective-client matches the padmin user created in Step 5; it keeps the naming consistent and tidy.

Step 2: Install and configure the middleware

1. Install ActiveMQ
sudo yum install activemq

2. Install the java_ks module

sudo puppet module install puppetlabs/java_ks

# If the host cannot reach the Forge, download the module tarball first and run:

puppet module install puppetlabs-java_ks-1.3.1.tar.gz --ignore-dependencies

3. Create the activemq module

mkdir -p /etc/puppet/modules/activemq/manifests/

mkdir -p /etc/puppet/modules/activemq/files/

4. Copy the credentials into the activemq module directory and rename them (they are used by keystores.pp)

cp /var/lib/puppet/ssl/certs/ca.pem /etc/puppet/modules/activemq/files/ca.pem

cp /var/lib/puppet/ssl/certs/puppetmaster.puppet.com.pem /etc/puppet/modules/activemq/files/cert.pem

cp /var/lib/puppet/ssl/private_keys/puppetmaster.puppet.com.pem /etc/puppet/modules/activemq/files/private_key.pem

chmod +r /etc/puppet/modules/activemq/files/private_key.pem

# The read bit is required, or the later file sync will fail; once synced, set the file back to 640 or 600 for safety.

5. Write the keystores.pp and init.pp files

vim /etc/puppet/modules/activemq/manifests/keystores.pp

Use this file verbatim; the detailed changes are noted below.

# /etc/puppet/modules/activemq/manifests/keystores.pp
class activemq::keystores (
  $keystore_password = 'puppet', # required

  # User must put these files in the module, or provide other URLs
  $ca          = 'puppet:///modules/activemq/ca.pem',
  $cert        = 'puppet:///modules/activemq/cert.pem',
  $private_key = 'puppet:///modules/activemq/private_key.pem',

  $activemq_confdir = '/etc/activemq',
  $activemq_user    = 'activemq',
) {

  # ----- Restart ActiveMQ if the SSL credentials ever change       -----
  # ----- Uncomment if you are fully managing ActiveMQ with Puppet. -----

  # Package['activemq'] -> Class[$title]
  # Java_ks['activemq_cert:keystore'] ~> Service['activemq']
  # Java_ks['activemq_ca:truststore'] ~> Service['activemq']

  # ----- Manage PEM files -----

  File {
    owner => root,
    group => root,
    mode  => 0600,
  }
  file { "${activemq_confdir}/ssl_credentials":
    ensure => directory,
    mode   => 0700,
  }
  file { "${activemq_confdir}/ssl_credentials/activemq_certificate.pem":
    ensure => file,
    source => $cert,
  }
  file { "${activemq_confdir}/ssl_credentials/activemq_private.pem":
    ensure => file,
    source => $private_key,
  }
  file { "${activemq_confdir}/ssl_credentials/ca.pem":
    ensure => file,
    source => $ca,
  }

  # ----- Manage Keystore Contents -----

  # Each keystore should have a dependency on the PEM files it relies on.

  # Truststore with copy of CA cert
  java_ks { 'activemq_ca:truststore':
    ensure       => latest,
    certificate  => "${activemq_confdir}/ssl_credentials/ca.pem",
    target       => "${activemq_confdir}/truststore.jks",
    password     => $keystore_password,
    trustcacerts => true,
    require      => File["${activemq_confdir}/ssl_credentials/ca.pem"],
  }

  # Keystore with ActiveMQ cert and private key
  java_ks { 'activemq_cert:keystore':
    ensure      => latest,
    certificate => "${activemq_confdir}/ssl_credentials/activemq_certificate.pem",
    private_key => "${activemq_confdir}/ssl_credentials/activemq_private.pem",
    target      => "${activemq_confdir}/keystore.jks",
    password    => $keystore_password,
    require     => [
      File["${activemq_confdir}/ssl_credentials/activemq_private.pem"],
      File["${activemq_confdir}/ssl_credentials/activemq_certificate.pem"],
    ],
  }

  # ----- Manage Keystore Files -----

  # Permissions only.
  # No ensure, source, or content.

  file { "${activemq_confdir}/keystore.jks":
    owner   => $activemq_user,
    group   => $activemq_user,
    mode    => 0600,
    require => Java_ks['activemq_cert:keystore'],
  }
  file { "${activemq_confdir}/truststore.jks":
    owner   => $activemq_user,
    group   => $activemq_user,
    mode    => 0600,
    require => Java_ks['activemq_ca:truststore'],
  }

}

vim /etc/puppet/modules/activemq/manifests/init.pp

class activemq {
  include activemq::keystores
}
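If a different keystore password is needed, the parameters of the keystores class can be overridden with a resource-like class declaration instead of a plain `include` (a Puppet DSL sketch; note the password must then also match the keyStorePassword/trustStorePassword values in activemq.xml later in this step):

```
class { 'activemq::keystores':
  keystore_password => 'puppet',   # must match sslContext in activemq.xml
}
```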


6. Assign the class to the ActiveMQ host node (i.e. the puppet master)

vim /etc/puppet/manifests/site.pp

#$puppetserver = 'puppetmaster.puppet.com' # set a global variable
node 'puppetmaster.puppet.com' {
  include activemq
}


7. Run puppet on the master

puppet agent -t -d

chmod 600 /etc/puppet/modules/activemq/files/private_key.pem

vim /etc/puppet/manifests/site.pp 

#$puppetserver = 'puppetmaster.puppet.com' # set a global variable
node 'puppetmaster.puppet.com' {
#  include activemq
}


# This was a one-off action, so comment it out: with private_key.pem set back to 600, further puppet runs would fail with a permissions error.

Important: check whether /etc/activemq/keystore.jks and /etc/activemq/truststore.jks were generated. If not, generate them manually as follows; wherever the commands below prompt for a password, enter: puppet

cd /etc/activemq/ssl_credentials

sudo keytool -import -alias "My CA" -file ca.pem -keystore truststore.jks

sudo cat activemq_private.pem activemq_certificate.pem > temp.pem

sudo openssl pkcs12 -export -in temp.pem -out activemq.p12 -name puppetmaster.puppet.com

sudo keytool -importkeystore  -destkeystore keystore.jks -srckeystore activemq.p12 -srcstoretype PKCS12 -alias puppetmaster.puppet.com

Check the result: both truststore.jks and keystore.jks should now exist.

sudo keytool -list -keystore truststore.jks

sudo keytool -list -keystore keystore.jks
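If keytool or openssl behave unexpectedly, the pkcs12 round trip above can be rehearsed with a throwaway key first. A self-contained sketch (the file names are placeholders, not the real credentials); it should print 1, confirming the certificate survives the export:

```shell
# Dry run of the cat + pkcs12 export with a scratch self-signed key.
d=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj /CN=test \
    -keyout "$d/key.pem" -out "$d/cert.pem" 2>/dev/null
cat "$d/key.pem" "$d/cert.pem" > "$d/temp.pem"
openssl pkcs12 -export -in "$d/temp.pem" -out "$d/test.p12" \
    -name test -passout pass:puppet
# Read the bundle back and count the certificates inside it:
openssl pkcs12 -in "$d/test.p12" -passin pass:puppet -nokeys -nodes \
    | grep -c 'BEGIN CERTIFICATE'
rm -rf "$d"
```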

Move the files into the directory that holds activemq.xml:

mv /etc/activemq/ssl_credentials/keystore.jks /etc/activemq/

mv /etc/activemq/ssl_credentials/truststore.jks /etc/activemq/

mv /etc/activemq/ssl_credentials/activemq.p12 /etc/activemq/

rm /etc/activemq/ssl_credentials/temp.pem

chmod 600 /etc/activemq/keystore.jks

chmod 600 /etc/activemq/truststore.jks

chmod 600 /etc/activemq/activemq.p12

chown activemq:activemq /etc/activemq/keystore.jks

chown activemq:activemq /etc/activemq/truststore.jks

chown activemq:activemq /etc/activemq/activemq.p12

8. Configure activemq.xml

vim /etc/activemq/activemq.xml

Use this file verbatim; see the appendix for the detailed changes. (Upload the file directly if possible; if you copy and paste, double-check that the formatting survived.)

<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:amq="http://activemq.apache.org/schema/core"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd http://activemq.apache.org/camel/schema/spring http://activemq.apache.org/camel/schema/spring/camel-spring.xsd"> 
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<value>file:${activemq.base}/conf/credentials.properties</value>
</property>
</bean>

<!--
For more information about what MCollective requires in this file,
see http://docs.puppetlabs.com/mcollective/deploy/middleware/activemq.html -->

<!--
WARNING: The elements that are direct children of <broker> MUST BE IN
ALPHABETICAL ORDER. This is fixed in ActiveMQ 5.6.0, but affects
previous versions back to 5.4. https://issues.apache.org/jira/browse/AMQ-3570 -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" useJmx="true" schedulePeriodForDestinationPurge="60000">
<!--
MCollective generally expects producer flow control to be turned off.
It will also generate a limitless number of single-use reply queues,
which should be garbage-collected after about five minutes to conserve
memory.

For more information, see: http://activemq.apache.org/producer-flow-control.html -->
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" producerFlowControl="false"/>
<policyEntry queue="*.reply.>" gcInactiveDestinations="true" inactiveTimoutBeforeGC="300000" />
</policyEntries>
</policyMap>
</destinationPolicy>

<managementContext>
<managementContext createConnector="false"/>
</managementContext>

<plugins>
<statisticsBrokerPlugin/>

<!--
This configures the users and groups used by this broker. Groups
are referenced below, in the write/read/admin attributes
of each authorizationEntry element.
-->
<simpleAuthenticationPlugin>
<users>
<authenticationUser username="mcollective" password="Guosir@eu2015" groups="mcollective,everyone"/>
<authenticationUser username="admin" password="Guosir@eu2015" groups="mcollective,admins,everyone"/>
</users>
</simpleAuthenticationPlugin>

<!--
Configure which users are allowed to read and write where. Permissions
are organized by group; groups are configured above, in the
authentication plugin.

With the rules below, both servers and admin users belong to group
mcollective, which can both issue and respond to commands. For an
example that splits permissions and doesn't allow servers to issue
commands, see: http://docs.puppetlabs.com/mcollective/deploy/middleware/activemq.html#detailed-restrictions -->
<authorizationPlugin>
<map>
<authorizationMap>
<authorizationEntries>
<authorizationEntry queue=">" write="admins" read="admins" admin="admins" />
<authorizationEntry topic=">" write="admins" read="admins" admin="admins" />
<authorizationEntry topic="mcollective.>" write="mcollective" read="mcollective" admin="mcollective" />
<authorizationEntry queue="mcollective.>" write="mcollective" read="mcollective" admin="mcollective" />
<!--
The advisory topics are part of ActiveMQ, and all users need access to them.
The "everyone" group is not special; you need to ensure every user is a member.
-->
<authorizationEntry topic="ActiveMQ.Advisory.>" read="everyone" write="everyone" admin="everyone"/>
</authorizationEntries>
</authorizationMap>
</map>
</authorizationPlugin>
</plugins>

<!--
The systemUsage controls the maximum amount of space the broker will
use for messages. For more information, see: http://docs.puppetlabs.com/mcollective/deploy/middleware/activemq.html#memory-and-temp-usage-for-messages-systemusage -->
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage limit="20 mb"/>
</memoryUsage>
<storeUsage>
<storeUsage limit="1 gb" name="foo"/>
</storeUsage>
<tempUsage>
<tempUsage limit="100 mb"/>
</tempUsage>
</systemUsage>
</systemUsage>

<sslContext>
<sslContext
keyStore="/etc/activemq/keystore.jks" keyStorePassword="puppet"
trustStore="/etc/activemq/truststore.jks" trustStorePassword="puppet"
/>
</sslContext>

<!--
The transport connectors allow ActiveMQ to listen for connections over
a given protocol. MCollective uses Stomp, and other ActiveMQ brokers
use OpenWire. You'll need different URLs depending on whether you are
using TLS. For more information, see:
 http://docs.puppetlabs.com/mcollective/deploy/middleware/activemq.html#transport-connectors -->
<transportConnectors>
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
<!-- <transportConnector name="stomp+nio" uri="stomp+nio://0.0.0.0:61613"/> -->
<!-- If using TLS, uncomment this and comment out the previous connector: -->
<transportConnector name="stomp+nio+ssl" uri="stomp+nio+ssl://0.0.0.0:61614?needClientAuth=true&amp;transport.enabledProtocols=TLSv1,TLSv1.1,TLSv1.2"/>
</transportConnectors>

</broker>

<!--
Enable web consoles, REST and Ajax APIs and demos.
It also includes Camel (with its web console); see ${ACTIVEMQ_HOME}/conf/camel.xml for more info.

See ${ACTIVEMQ_HOME}/conf/jetty.xml for more details.
-->
<import resource="jetty.xml"/>
</beans>

9. Start (or restart) the ActiveMQ service

service activemq restart

10. Configure the firewall on the agent machines

vim /etc/sysconfig/iptables and add the following line:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 61614 -j ACCEPT

Then reload the rules: service iptables restart

Step 3: Install MCollective

1. Install the mcollective package on both the master and the agents

yum install mcollective

2. Install the mcollective-client package on the master

yum install mcollective-client

3. Enable the mcollective service at boot

RHEL 6:

service mcollective status                # check whether the service is running

chmod +x /etc/init.d/mcollective          # make the init script executable
chkconfig --add mcollective               # register mcollective in the system service list
chkconfig mcollective on                  # enable the service at boot (on/off)
chkconfig --list mcollective              # confirm the mcollective service is now registered

RHEL 7:

systemctl status mcollective.service        # show the service status

systemctl enable mcollective.service        # enable the service at boot

systemctl is-enabled mcollective.service    # check whether it is enabled at boot

systemctl list-unit-files | grep enabled    # list the enabled units

Step 4: Configure the servers

1. Create the mcollective module

mkdir -p /etc/puppet/modules/mcollective/manifests/

mkdir -p /etc/puppet/modules/mcollective/files/

2. Copy the mcollective-servers and CA credentials into the module directory, renaming them:

cp /var/lib/puppet/ssl/public_keys/mcollective-servers.pem /etc/puppet/modules/mcollective/files/server_public.pem

cp /var/lib/puppet/ssl/private_keys/mcollective-servers.pem /etc/puppet/modules/mcollective/files/server_private.pem

cp /var/lib/puppet/ssl/certs/mcollective-servers.pem /etc/puppet/modules/mcollective/files/server_cert.pem

cp /var/lib/puppet/ssl/certs/ca.pem /etc/puppet/modules/mcollective/files/server_cacert.pem

3. Create the templates directory and the server.cfg.erb template

mkdir -p /etc/puppet/modules/mcollective/templates/

vim /etc/puppet/modules/mcollective/templates/server.cfg.erb

Use this file verbatim; see the appendix for the detailed changes.

<% ssldir = '/var/lib/puppet/ssl' %>
# /etc/mcollective/server.cfg

# ActiveMQ connector settings:
connector = activemq
direct_addressing = 1
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = puppetmaster.puppet.com
plugin.activemq.pool.1.port = 61614
plugin.activemq.pool.1.user = mcollective
plugin.activemq.pool.1.password = Guosir@eu2015
plugin.activemq.pool.1.ssl = 1
plugin.activemq.pool.1.ssl.ca = <%= ssldir %>/certs/ca.pem
plugin.activemq.pool.1.ssl.cert = <%= ssldir %>/certs/<%= scope.lookupvar('::clientcert') %>.pem
plugin.activemq.pool.1.ssl.key = <%= ssldir %>/private_keys/<%= scope.lookupvar('::clientcert') %>.pem
plugin.activemq.pool.1.ssl.fallback = 0

# SSL security plugin settings:
securityprovider = ssl
plugin.ssl_client_cert_dir = /etc/mcollective/ssl/clients
plugin.ssl_server_private = /etc/mcollective/ssl/server_private.pem
plugin.ssl_server_public = /etc/mcollective/ssl/server_public.pem

# Facts, identity, and classes:
identity = <%= scope.lookupvar('::fqdn') %>
factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml
classesfile = /var/lib/puppet/state/classes.txt

# No additional subcollectives:
collectives = mcollective
main_collective = mcollective

# Registration:
# We don't configure a listener, and only send these messages to keep the
# Stomp connection alive. This will use the default "agentlist" registration
# plugin.
registerinterval = 600

# Auditing (optional):
# If you turn this on, you must arrange to rotate the log file it creates.
rpcaudit = 1
rpcauditprovider = logfile
plugin.rpcaudit.logfile = /var/log/mcollective-audit.log

# Authorization:
# If you turn this on now, you won't be able to issue most MCollective
# commands, although `mco ping` will work. You should deploy the
# ActionPolicy plugin before uncommenting this; see "Deploy Plugins" below.

# rpcauthorization = 1
# rpcauthprovider = action_policy
# plugin.actionpolicy.allow_unconfigured = 1

# Logging:
logger_type = file
loglevel = info
logfile = /var/log/mcollective.log
keeplogs = 5
max_log_size = 2097152
logfacility = user

# Platform defaults:
# These settings differ based on platform; the default config file created by
# the package should include correct values. If you are managing settings as
# resources, you can ignore them, but with a template you'll have to account
# for the differences.
<% if scope.lookupvar('::osfamily') == 'RedHat' -%>
libdir = /usr/libexec/mcollective
daemonize = 1
<% elsif scope.lookupvar('::osfamily') == 'Debian' -%>
libdir = /usr/share/mcollective/plugins
daemonize = 1
<% else -%>
# INSERT PLATFORM-APPROPRIATE VALUES FOR LIBDIR AND DAEMONIZE
<% end %>


4. Create server.pp and init.pp

vim /etc/puppet/modules/mcollective/manifests/server.pp

class mcollective::server (
  $server_public  = 'puppet:///modules/mcollective/server_public.pem',
  $server_private = 'puppet:///modules/mcollective/server_private.pem',
  $server_cert    = 'puppet:///modules/mcollective/server_cert.pem',
  $server_cacert  = 'puppet:///modules/mcollective/server_cacert.pem',
) {
  file { '/etc/mcollective/ssl/server_public.pem':
    ensure => file,
    source => $server_public,
  }
  file { '/etc/mcollective/ssl/server_private.pem':
    ensure => file,
    source => $server_private,
  }
  file { '/etc/mcollective/ssl/server_cert.pem':
    ensure => file,
    source => $server_cert,
  }
  file { '/etc/mcollective/ssl/server_cacert.pem':
    ensure => file,
    source => $server_cacert,
  }
  file { '/etc/mcollective/facts.yaml':
    owner    => root,
    group    => root,
    mode     => 400,
    loglevel => debug, # reduce noise in Puppet reports
    content  => inline_template("<%= scope.to_hash.reject { |k,v| k.to_s =~ /(uptime_seconds|timestamp|free)/ }.to_yaml %>"), # exclude rapidly changing facts
  }
  file { '/etc/mcollective/server.cfg':
    ensure  => file,
    content => template('mcollective/server.cfg.erb'),
  }
}


vim /etc/puppet/modules/mcollective/manifests/init.pp

class mcollective {
  include mcollective::server
}


5. Assign the class to all nodes (add the following)

vim /etc/puppet/manifests/site.pp

node default {
  include mcollective
}


# Note: node default only takes effect for nodes that have no dedicated node definition.

Step 5: Configure the clients

1. On the master, create the padmin user and group (modeled on Puppet Enterprise's peadmin, with the 'e' dropped)

sudo groupadd padmin

sudo useradd -d /var/lib/padmin -g padmin -s /bin/bash padmin

2. Create the .mcollective.d directory

su - padmin

mkdir -p ~/.mcollective.d

3. Generate the client credentials

su - root

puppet cert generate padmin-mcollective-client # skip if already generated in Step 1

4. Copy the credentials into the .mcollective.d directory and the module directory, renaming them:

cp /var/lib/puppet/ssl/public_keys/padmin-mcollective-client.pem /var/lib/padmin/.mcollective.d/padmin_public.pem

cp /var/lib/puppet/ssl/private_keys/padmin-mcollective-client.pem /var/lib/padmin/.mcollective.d/padmin_private.pem

cp /var/lib/puppet/ssl/certs/padmin-mcollective-client.pem /var/lib/padmin/.mcollective.d/padmin_cert.pem

cp /var/lib/puppet/ssl/certs/ca.pem /var/lib/padmin/.mcollective.d/padmin_cacert.pem

cp /var/lib/puppet/ssl/public_keys/mcollective-servers.pem /var/lib/padmin/.mcollective.d/server_public.pem

cp /var/lib/puppet/ssl/public_keys/padmin-mcollective-client.pem /etc/puppet/modules/mcollective/files/padmin_public.pem

cd /var/lib/padmin/.mcollective.d

chown padmin:padmin *.pem

chmod 600 padmin_private.pem
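The chown/chmod above narrow the private key to the padmin user. The intended end state can be rehearsed on throwaway files (a sketch; the 644 on the public half is an assumption here, since only the private key strictly needs tightening):

```shell
# Rehearse the target permissions in a scratch directory (placeholder files):
d=$(mktemp -d)
touch "$d/padmin_private.pem" "$d/padmin_public.pem"
chmod 600 "$d/padmin_private.pem"   # private key: owner read/write only
chmod 644 "$d/padmin_public.pem"    # public key may remain world-readable
stat -c '%a %n' "$d"/*.pem          # expect 600 on private, 644 on public
rm -rf "$d"
```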

5. Create client.pp and update init.pp

vim /etc/puppet/modules/mcollective/manifests/client.pp

class mcollective::client (
  $padmin_public = 'puppet:///modules/mcollective/padmin_public.pem',
) {
  file { '/etc/mcollective/ssl/clients':
    ensure => directory,
    mode   => 0755,
    before => File['/etc/mcollective/ssl/clients/padmin_public.pem'],
  }
  file { '/etc/mcollective/ssl/clients/padmin_public.pem':
    ensure => file,
    source => $padmin_public,
  }
}


vim /etc/puppet/modules/mcollective/manifests/init.pp

class mcollective {
  include mcollective::server
  include mcollective::client
}


6. Assign the class to all nodes (no action needed: it was already included when configuring the servers, via the snippet below)

vim site.pp

node default {
  include mcollective
}


7. Create the .mcollective file

vim /var/lib/padmin/.mcollective

chown padmin:padmin /var/lib/padmin/.mcollective

Use this file's contents verbatim; see the appendix for the detailed changes.

# ~/.mcollective
# or
# /etc/mcollective/client.cfg

# ActiveMQ connector settings:
connector = activemq
direct_addressing = 1
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = puppetmaster.puppet.com
plugin.activemq.pool.1.port = 61614
plugin.activemq.pool.1.user = mcollective
plugin.activemq.pool.1.password = Guosir@eu2015
plugin.activemq.pool.1.ssl = 1
plugin.activemq.pool.1.ssl.ca = /var/lib/padmin/.mcollective.d/padmin_cacert.pem
plugin.activemq.pool.1.ssl.cert = /var/lib/padmin/.mcollective.d/padmin_cert.pem
plugin.activemq.pool.1.ssl.key = /var/lib/padmin/.mcollective.d/padmin_private.pem
plugin.activemq.pool.1.ssl.fallback = 0

# SSL security plugin settings:
securityprovider = ssl
plugin.ssl_server_public = /var/lib/padmin/.mcollective.d/server_public.pem
plugin.ssl_client_private = /var/lib/padmin/.mcollective.d/padmin_private.pem
plugin.ssl_client_public = /var/lib/padmin/.mcollective.d/padmin_public.pem

# Interface settings:
default_discovery_method = mc
direct_addressing_threshold = 10
ttl = 60
color = 1
rpclimitmethod = first

# No additional subcollectives:
collectives = mcollective
main_collective = mcollective

# Platform defaults:
# These settings differ based on platform; the default config file created
# by the package should include correct values or omit the setting if the
# default value is fine.
libdir = /usr/libexec/mcollective
helptemplatedir = /etc/mcollective

# Logging:
logger_type = console
loglevel = warn


8. Re-run puppet on every MCollective server

puppet agent --test

Step 6: Install agent plugins

The ActionPolicy plugin is not a good fit for our environment and would be overkill here, so it is skipped. For details, see the theory post mentioned at the start of this article.

1. Install the puppet agent plugin

Master:

yum install mcollective-puppet-agent mcollective-puppet-common mcollective-puppet-client

Agents:

yum install mcollective-puppet-agent mcollective-puppet-common

2. Install the service agent plugin

Master:

yum install mcollective-service-agent mcollective-service-common mcollective-service-client

Agents:

yum install mcollective-service-agent mcollective-service-common

3. Install the package agent plugin

Master:

yum install mcollective-package-agent mcollective-package-common mcollective-package-client

Agents:

yum install mcollective-package-agent mcollective-package-common

4. Install the shell agent plugin # make sure SELinux is disabled; it has caused problems with this plugin before

Master:

yum install mcollective-shell-agent mcollective-shell-common mcollective-shell-client

Agents:

yum install mcollective-shell-agent mcollective-shell-common

5. Restart the mcollective service on all machines

service mcollective restart

6. Check the agent plugin installation

su - padmin

mco rpc rpcutil agent_inventory -I puppetmaster.puppet.com

mco rpc rpcutil agent_inventory -I agent1.puppet.com

mco shell run --tail "echo 123"

mco shell run "date"

Part 4: Appendix

To see exactly what was modified in each of the files above, refer to the companion post on this blog, "Installing and Configuring MCollective + ActiveMQ with Puppet — Appendix", where the changes are specifically marked:

keystores.pp
activemq.xml
server.cfg.erb
.mcollective

(End)